Test Report: Hyper-V_Windows 19011

86685d02f89d02484c16ac75ab0cd1e5f6c63d49:2024-06-03:34742

Failed tests (37/190)

Order  Failed test  Duration (s)
29 TestAddons/parallel/Registry 73.91
56 TestErrorSpam/setup 197.61
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 33.81
81 TestFunctional/serial/ExtraConfig 282.89
82 TestFunctional/serial/ComponentHealth 180.48
85 TestFunctional/serial/InvalidService 4.23
87 TestFunctional/parallel/ConfigCmd 1.48
91 TestFunctional/parallel/StatusCmd 302.1
95 TestFunctional/parallel/ServiceCmdConnect 187.24
97 TestFunctional/parallel/PersistentVolumeClaim 491.49
101 TestFunctional/parallel/MySQL 230.7
107 TestFunctional/parallel/NodeLabels 241.09
112 TestFunctional/parallel/ServiceCmd/DeployApp 2.23
113 TestFunctional/parallel/ServiceCmd/List 7.48
114 TestFunctional/parallel/ServiceCmd/JSONOutput 7.67
115 TestFunctional/parallel/ServiceCmd/HTTPS 7.66
116 TestFunctional/parallel/ServiceCmd/Format 7.74
117 TestFunctional/parallel/ServiceCmd/URL 7.42
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 7.55
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 4.27
130 TestFunctional/parallel/ImageCommands/ImageListShort 59.96
131 TestFunctional/parallel/ImageCommands/ImageListTable 60.26
132 TestFunctional/parallel/ImageCommands/ImageListJson 60.27
133 TestFunctional/parallel/ImageCommands/ImageListYaml 59.99
134 TestFunctional/parallel/ImageCommands/ImageBuild 120.7
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 74.97
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 120.5
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 120.49
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 60.33
144 TestFunctional/parallel/DockerEnv/powershell 432.58
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.39
158 TestMultiControlPlane/serial/PingHostFromPods 69.4
163 TestMultiControlPlane/serial/StopSecondaryNode 94.53
219 TestMultiNode/serial/PingHostFrom2Pods 57.36
226 TestMultiNode/serial/RestartKeepsNodes 491.36
251 TestNoKubernetes/serial/StartWithK8s 303.48
259 TestPause/serial/DeletePaused 10800.473

TestAddons/parallel/Registry (73.91s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 27.7464ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-4mwfz" [04ed4d5a-632f-444a-b01c-23b8e51aaa10] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0248533s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v26mc" [77d080cc-0158-445b-ac0f-a5c067638727] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0101025s
addons_test.go:342: (dbg) Run:  kubectl --context addons-975100 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-975100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-975100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.2482977s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 ip: (2.9127891s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0603 12:30:31.426923    4372 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-975100 ip"
2024/06/03 12:30:34 [DEBUG] GET http://172.22.146.54:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 addons disable registry --alsologtostderr -v=1: (16.1021431s)
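Note on the failure above: the "minikube -p addons-975100 ip" invocation itself succeeded, but the check at addons_test.go:366 requires the command's stderr to be empty, and the Docker CLI wrote a warning about an unresolvable "default" context on the Windows host, so the test was marked failed. The sketch below shows that style of check in a self-contained form; the binary path, profile name, and plain main() structure are illustrative assumptions taken from the log above, not the actual minikube test helpers.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same style of command the test ran: the minikube binary with "-p <profile> ip".
	// Path and profile are copied from the log above purely for illustration.
	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "addons-975100", "ip")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("command failed: %v\n", err)
		return
	}
	// The condition that tripped in this run: any stderr output, even a benign
	// Docker CLI context warning, fails the "expected stderr to be -empty-" check.
	if stderr.Len() > 0 {
		fmt.Printf("expected stderr to be -empty- but got: %q\n", stderr.String())
	}
}

In this run the stderr content was host-environment noise (the Docker CLI could not find the context metadata file referenced in the warning) rather than a minikube error, which is why only the stderr assertion, and not the command itself, failed.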
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-975100 -n addons-975100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-975100 -n addons-975100: (13.352321s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 logs -n 25: (9.8298257s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-687900 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC |                     |
	|         | -p download-only-687900              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC | 03 Jun 24 12:22 UTC |
	| delete  | -p download-only-687900              | download-only-687900 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC | 03 Jun 24 12:22 UTC |
	| start   | -o=json --download-only              | download-only-633500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC |                     |
	|         | -p download-only-633500              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC | 03 Jun 24 12:22 UTC |
	| delete  | -p download-only-633500              | download-only-633500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC | 03 Jun 24 12:22 UTC |
	| delete  | -p download-only-687900              | download-only-687900 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC | 03 Jun 24 12:22 UTC |
	| delete  | -p download-only-633500              | download-only-633500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC | 03 Jun 24 12:22 UTC |
	| start   | --download-only -p                   | binary-mirror-022300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC |                     |
	|         | binary-mirror-022300                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr                    |                      |                   |         |                     |                     |
	|         | --binary-mirror                      |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:60183               |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-022300              | binary-mirror-022300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC | 03 Jun 24 12:22 UTC |
	| addons  | enable dashboard -p                  | addons-975100        | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC |                     |
	|         | addons-975100                        |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-975100        | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC |                     |
	|         | addons-975100                        |                      |                   |         |                     |                     |
	| start   | -p addons-975100 --wait=true         | addons-975100        | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC | 03 Jun 24 12:30 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --addons=registry                    |                      |                   |         |                     |                     |
	|         | --addons=metrics-server              |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress     |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |                   |         |                     |                     |
	| addons  | enable headlamp                      | addons-975100        | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
	|         | -p addons-975100                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-975100        | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
	|         | -p addons-975100                     |                      |                   |         |                     |                     |
	| ip      | addons-975100 ip                     | addons-975100        | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
	| addons  | addons-975100 addons disable         | addons-975100        | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
	|         | registry --alsologtostderr           |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-975100        | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:30 UTC |                     |
	|         | addons-975100                        |                      |                   |         |                     |                     |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:22:52
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:22:52.183406     196 out.go:291] Setting OutFile to fd 772 ...
	I0603 12:22:52.184685     196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:22:52.184753     196 out.go:304] Setting ErrFile to fd 756...
	I0603 12:22:52.184753     196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:22:52.206972     196 out.go:298] Setting JSON to false
	I0603 12:22:52.209836     196 start.go:129] hostinfo: {"hostname":"minikube3","uptime":18300,"bootTime":1717399071,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 12:22:52.209836     196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 12:22:52.216743     196 out.go:177] * [addons-975100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 12:22:52.221062     196 notify.go:220] Checking for updates...
	I0603 12:22:52.222903     196 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:22:52.225953     196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:22:52.228620     196 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 12:22:52.231090     196 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:22:52.233311     196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:22:52.236447     196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:22:57.537559     196 out.go:177] * Using the hyperv driver based on user configuration
	I0603 12:22:57.541090     196 start.go:297] selected driver: hyperv
	I0603 12:22:57.541090     196 start.go:901] validating driver "hyperv" against <nil>
	I0603 12:22:57.541090     196 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:22:57.591629     196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 12:22:57.592864     196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:22:57.592864     196 cni.go:84] Creating CNI manager for ""
	I0603 12:22:57.592864     196 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:22:57.592864     196 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 12:22:57.592864     196 start.go:340] cluster config:
	{Name:addons-975100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-975100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:22:57.593491     196 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:22:57.598319     196 out.go:177] * Starting "addons-975100" primary control-plane node in "addons-975100" cluster
	I0603 12:22:57.601025     196 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 12:22:57.601025     196 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 12:22:57.601025     196 cache.go:56] Caching tarball of preloaded images
	I0603 12:22:57.601552     196 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 12:22:57.601728     196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 12:22:57.601728     196 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\config.json ...
	I0603 12:22:57.602485     196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\config.json: {Name:mk9766447a0447abad5588476862ce923f37eac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:22:57.603245     196 start.go:360] acquireMachinesLock for addons-975100: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:22:57.603923     196 start.go:364] duration metric: took 144.8µs to acquireMachinesLock for "addons-975100"
	I0603 12:22:57.603964     196 start.go:93] Provisioning new machine with config: &{Name:addons-975100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-975100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 12:22:57.603964     196 start.go:125] createHost starting for "" (driver="hyperv")
	I0603 12:22:57.605998     196 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0603 12:22:57.606976     196 start.go:159] libmachine.API.Create for "addons-975100" (driver="hyperv")
	I0603 12:22:57.606976     196 client.go:168] LocalClient.Create starting
	I0603 12:22:57.607971     196 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0603 12:22:57.809975     196 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0603 12:22:57.966730     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 12:23:00.066026     196 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 12:23:00.066026     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:00.066156     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 12:23:01.825000     196 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 12:23:01.825000     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:01.825212     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 12:23:03.276134     196 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 12:23:03.276370     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:03.276370     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 12:23:06.943548     196 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 12:23:06.943548     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:06.945575     196 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 12:23:07.395303     196 main.go:141] libmachine: Creating SSH key...
	I0603 12:23:07.783905     196 main.go:141] libmachine: Creating VM...
	I0603 12:23:07.783905     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 12:23:10.636442     196 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 12:23:10.637017     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:10.637017     196 main.go:141] libmachine: Using switch "Default Switch"
	I0603 12:23:10.637156     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 12:23:12.337343     196 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 12:23:12.337554     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:12.337554     196 main.go:141] libmachine: Creating VHD
	I0603 12:23:12.337749     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 12:23:16.074435     196 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 0529D55A-CD90-4271-ABDE-9C18861BD150
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 12:23:16.074435     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:16.074527     196 main.go:141] libmachine: Writing magic tar header
	I0603 12:23:16.074634     196 main.go:141] libmachine: Writing SSH key tar header
	I0603 12:23:16.084025     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 12:23:19.275121     196 main.go:141] libmachine: [stdout =====>] : 
	I0603 12:23:19.275121     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:19.275121     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\disk.vhd' -SizeBytes 20000MB
	I0603 12:23:21.808607     196 main.go:141] libmachine: [stdout =====>] : 
	I0603 12:23:21.808607     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:21.808607     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-975100 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0603 12:23:25.463982     196 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-975100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 12:23:25.463982     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:25.464661     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-975100 -DynamicMemoryEnabled $false
	I0603 12:23:27.669686     196 main.go:141] libmachine: [stdout =====>] : 
	I0603 12:23:27.670206     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:27.670206     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-975100 -Count 2
	I0603 12:23:29.817114     196 main.go:141] libmachine: [stdout =====>] : 
	I0603 12:23:29.817176     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:29.817283     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-975100 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\boot2docker.iso'
	I0603 12:23:32.398224     196 main.go:141] libmachine: [stdout =====>] : 
	I0603 12:23:32.398938     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:32.399038     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-975100 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\disk.vhd'
	I0603 12:23:35.009035     196 main.go:141] libmachine: [stdout =====>] : 
	I0603 12:23:35.009035     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:35.009035     196 main.go:141] libmachine: Starting VM...
	I0603 12:23:35.009959     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-975100
	I0603 12:23:38.120413     196 main.go:141] libmachine: [stdout =====>] : 
	I0603 12:23:38.120413     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:38.120413     196 main.go:141] libmachine: Waiting for host to start...
	I0603 12:23:38.120413     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:23:40.437238     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:23:40.437441     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:40.437533     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:23:42.965932     196 main.go:141] libmachine: [stdout =====>] : 
	I0603 12:23:42.966044     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:43.967753     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:23:46.177503     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:23:46.178140     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:46.178319     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:23:48.700255     196 main.go:141] libmachine: [stdout =====>] : 
	I0603 12:23:48.700255     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:49.703781     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:23:51.931359     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:23:51.931359     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:51.932169     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:23:54.462094     196 main.go:141] libmachine: [stdout =====>] : 
	I0603 12:23:54.463127     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:55.472174     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:23:57.659182     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:23:57.659182     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:23:57.659748     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:24:00.185967     196 main.go:141] libmachine: [stdout =====>] : 
	I0603 12:24:00.185967     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:01.190769     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:03.384805     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:03.384805     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:03.385177     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:24:05.961939     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:24:05.962129     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:05.962311     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:08.060346     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:08.061063     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:08.061185     196 machine.go:94] provisionDockerMachine start ...
	I0603 12:24:08.061375     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:10.221295     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:10.221295     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:10.221776     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:24:12.724093     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:24:12.724934     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:12.733251     196 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:12.742555     196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.54 22 <nil> <nil>}
	I0603 12:24:12.742555     196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:24:12.869650     196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 12:24:12.869650     196 buildroot.go:166] provisioning hostname "addons-975100"
	I0603 12:24:12.870262     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:14.969391     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:14.969391     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:14.969647     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:24:17.480725     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:24:17.480995     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:17.486351     196 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:17.486985     196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.54 22 <nil> <nil>}
	I0603 12:24:17.487072     196 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-975100 && echo "addons-975100" | sudo tee /etc/hostname
	I0603 12:24:17.653247     196 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-975100
	
	I0603 12:24:17.653417     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:19.851913     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:19.851913     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:19.852694     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:24:22.348141     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:24:22.348141     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:22.354755     196 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:22.354755     196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.54 22 <nil> <nil>}
	I0603 12:24:22.354755     196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-975100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-975100/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-975100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:24:22.513357     196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:24:22.513533     196 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 12:24:22.513533     196 buildroot.go:174] setting up certificates
	I0603 12:24:22.513533     196 provision.go:84] configureAuth start
	I0603 12:24:22.513621     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:24.624014     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:24.624496     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:24.624587     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:24:27.159252     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:24:27.159252     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:27.159252     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:29.245106     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:29.245106     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:29.246129     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:24:31.745138     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:24:31.745398     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:31.745398     196 provision.go:143] copyHostCerts
	I0603 12:24:31.746066     196 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 12:24:31.747653     196 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 12:24:31.749117     196 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 12:24:31.750337     196 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-975100 san=[127.0.0.1 172.22.146.54 addons-975100 localhost minikube]
	I0603 12:24:31.886029     196 provision.go:177] copyRemoteCerts
	I0603 12:24:31.900849     196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:24:31.900849     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:34.014234     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:34.014706     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:34.014706     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:24:36.497636     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:24:36.497848     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:36.498090     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:24:36.604140     196 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7032516s)
	I0603 12:24:36.604414     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 12:24:36.651572     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:24:36.700864     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:24:36.751586     196 provision.go:87] duration metric: took 14.2379328s to configureAuth
	I0603 12:24:36.751586     196 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:24:36.752691     196 config.go:182] Loaded profile config "addons-975100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:24:36.752691     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:38.921151     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:38.921151     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:38.922209     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:24:41.403068     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:24:41.403958     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:41.409283     196 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:41.409863     196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.54 22 <nil> <nil>}
	I0603 12:24:41.409863     196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 12:24:41.541891     196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 12:24:41.542105     196 buildroot.go:70] root file system type: tmpfs
	I0603 12:24:41.542368     196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 12:24:41.542368     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:43.618784     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:43.618907     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:43.618979     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:24:46.133326     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:24:46.133613     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:46.139080     196 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:46.139601     196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.54 22 <nil> <nil>}
	I0603 12:24:46.139808     196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 12:24:46.292844     196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 12:24:46.293004     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:48.392853     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:48.393222     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:48.393222     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:24:50.867154     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:24:50.867154     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:50.875242     196 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:50.875242     196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.54 22 <nil> <nil>}
	I0603 12:24:50.875847     196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 12:24:52.963751     196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 12:24:52.963751     196 machine.go:97] duration metric: took 44.9021908s to provisionDockerMachine
	I0603 12:24:52.963751     196 client.go:171] duration metric: took 1m55.3558079s to LocalClient.Create
	I0603 12:24:52.963751     196 start.go:167] duration metric: took 1m55.3558079s to libmachine.API.Create "addons-975100"
	I0603 12:24:52.963751     196 start.go:293] postStartSetup for "addons-975100" (driver="hyperv")
	I0603 12:24:52.963751     196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:24:52.976790     196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:24:52.976790     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:55.070400     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:55.070400     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:55.071189     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:24:57.533082     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:24:57.533291     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:57.533462     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:24:57.637287     196 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6604577s)
	I0603 12:24:57.650350     196 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:24:57.656785     196 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:24:57.656785     196 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 12:24:57.657388     196 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 12:24:57.657606     196 start.go:296] duration metric: took 4.6938158s for postStartSetup
	I0603 12:24:57.660021     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:24:59.759867     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:24:59.760128     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:24:59.760207     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:25:02.326376     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:25:02.326750     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:25:02.326750     196 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\config.json ...
	I0603 12:25:02.329838     196 start.go:128] duration metric: took 2m4.7248283s to createHost
	I0603 12:25:02.330002     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:25:04.427208     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:25:04.427208     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:25:04.428125     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:25:06.928057     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:25:06.928057     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:25:06.933787     196 main.go:141] libmachine: Using SSH client type: native
	I0603 12:25:06.933816     196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.54 22 <nil> <nil>}
	I0603 12:25:06.933816     196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:25:07.072754     196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717417507.084738721
	
	I0603 12:25:07.072754     196 fix.go:216] guest clock: 1717417507.084738721
	I0603 12:25:07.072754     196 fix.go:229] Guest: 2024-06-03 12:25:07.084738721 +0000 UTC Remote: 2024-06-03 12:25:02.3299124 +0000 UTC m=+130.306649901 (delta=4.754826321s)
	I0603 12:25:07.073304     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:25:09.163067     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:25:09.163067     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:25:09.163168     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:25:11.703398     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:25:11.703398     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:25:11.709903     196 main.go:141] libmachine: Using SSH client type: native
	I0603 12:25:11.710570     196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.54 22 <nil> <nil>}
	I0603 12:25:11.710570     196 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717417507
	I0603 12:25:11.857934     196 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:25:07 UTC 2024
	
	I0603 12:25:11.857934     196 fix.go:236] clock set: Mon Jun  3 12:25:07 UTC 2024
	 (err=<nil>)
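	The clock fix above reads the guest clock with "date +%s.%N", compares it to the host time recorded for the profile (a delta of ~4.75s in this run), and then forces the guest clock to the host's epoch value. Condensed to the two guest-side commands, using the epoch seen here:
	
	    # Current guest time as seconds.nanoseconds since the epoch.
	    date +%s.%N
	    # Set the guest clock to the host-provided timestamp (2024-06-03 12:25:07 UTC).
	    sudo date -s @1717417507
	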
	I0603 12:25:11.857934     196 start.go:83] releasing machines lock for "addons-975100", held for 2m14.2528452s
	I0603 12:25:11.857934     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:25:13.925783     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:25:13.925783     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:25:13.926344     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:25:16.428464     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:25:16.428464     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:25:16.434374     196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:25:16.434927     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:25:16.444089     196 ssh_runner.go:195] Run: cat /version.json
	I0603 12:25:16.444089     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:25:18.592663     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:25:18.592753     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:25:18.592753     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:25:18.592753     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:25:18.592753     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:25:18.592753     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:25:21.208640     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:25:21.208849     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:25:21.209005     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:25:21.229365     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:25:21.230253     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:25:21.230253     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:25:21.309171     196 ssh_runner.go:235] Completed: cat /version.json: (4.8650414s)
	I0603 12:25:21.320525     196 ssh_runner.go:195] Run: systemctl --version
	I0603 12:25:21.386777     196 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9523615s)
	I0603 12:25:21.399164     196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:25:21.407760     196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:25:21.419754     196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:25:21.449582     196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:25:21.449634     196 start.go:494] detecting cgroup driver to use...
	I0603 12:25:21.449634     196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:25:21.494073     196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 12:25:21.525523     196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 12:25:21.543566     196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 12:25:21.553886     196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 12:25:21.585214     196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:25:21.616151     196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 12:25:21.644022     196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:25:21.672819     196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:25:21.704890     196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 12:25:21.738350     196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 12:25:21.767179     196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 12:25:21.798179     196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:25:21.825281     196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:25:21.854647     196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:25:22.041357     196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 12:25:22.075852     196 start.go:494] detecting cgroup driver to use...
	I0603 12:25:22.089121     196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 12:25:22.124325     196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:25:22.157718     196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:25:22.202295     196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:25:22.238397     196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 12:25:22.276546     196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 12:25:22.339599     196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 12:25:22.363147     196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:25:22.410053     196 ssh_runner.go:195] Run: which cri-dockerd
	I0603 12:25:22.428047     196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 12:25:22.447428     196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 12:25:22.489058     196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 12:25:22.690614     196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 12:25:22.873819     196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 12:25:22.874148     196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 12:25:22.919525     196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:25:23.127522     196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 12:25:25.622860     196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4952892s)
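	The "configuring docker to use cgroupfs" step writes a small /etc/docker/daemon.json (130 bytes in this run) before the restart above. The log does not show the file's contents; an illustrative daemon.json that selects the cgroupfs driver (the exec-opts key is the relevant part, the remaining fields are assumptions) might look like:
	
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"],
	      "log-driver": "json-file",
	      "log-opts": { "max-size": "100m" },
	      "storage-driver": "overlay2"
	    }
	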
	I0603 12:25:25.634823     196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 12:25:25.667504     196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 12:25:25.702522     196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 12:25:25.905505     196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 12:25:26.125755     196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:25:26.321220     196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 12:25:26.359598     196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 12:25:26.394037     196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:25:26.587956     196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 12:25:26.691686     196 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 12:25:26.704352     196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 12:25:26.713672     196 start.go:562] Will wait 60s for crictl version
	I0603 12:25:26.724655     196 ssh_runner.go:195] Run: which crictl
	I0603 12:25:26.742049     196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:25:26.791128     196 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 12:25:26.801866     196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 12:25:26.842848     196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 12:25:26.875350     196 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 12:25:26.875350     196 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 12:25:26.879405     196 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 12:25:26.879405     196 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 12:25:26.879405     196 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 12:25:26.879405     196 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 12:25:26.882474     196 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 12:25:26.882474     196 ip.go:210] interface addr: 172.22.144.1/20
	I0603 12:25:26.894140     196 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 12:25:26.900212     196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:25:26.922774     196 kubeadm.go:877] updating cluster {Name:addons-975100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:addons-975100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.54 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:25:26.922774     196 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 12:25:26.931789     196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 12:25:26.953331     196 docker.go:685] Got preloaded images: 
	I0603 12:25:26.953331     196 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0603 12:25:26.964374     196 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 12:25:26.995324     196 ssh_runner.go:195] Run: which lz4
	I0603 12:25:27.013728     196 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 12:25:27.020017     196 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:25:27.020409     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0603 12:25:28.823689     196 docker.go:649] duration metric: took 1.8228399s to copy over tarball
	I0603 12:25:28.835703     196 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:25:34.208856     196 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.3731082s)
	I0603 12:25:34.208856     196 ssh_runner.go:146] rm: /preloaded.tar.lz4
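	The preload handling above boils down to: check whether /preloaded.tar.lz4 already exists on the guest, copy the cached tarball over when it does not, unpack it into /var with lz4, and delete the tarball. A condensed sketch of the guest-side commands (the final removal is issued through ssh_runner's rm helper; "sudo rm -f" is an approximation of it):
	
	    # Existence/size check; status 1 here means the tarball must be copied over first.
	    stat -c "%s %y" /preloaded.tar.lz4
	    # Unpack the preloaded images into the Docker storage tree, then clean up.
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4
	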
	I0603 12:25:34.268682     196 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 12:25:34.286986     196 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0603 12:25:34.333134     196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:25:34.551797     196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 12:25:40.124416     196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.5725728s)
	I0603 12:25:40.134599     196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 12:25:40.160167     196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0603 12:25:40.160316     196 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:25:40.160316     196 kubeadm.go:928] updating node { 172.22.146.54 8443 v1.30.1 docker true true} ...
	I0603 12:25:40.160393     196 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-975100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.146.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-975100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:25:40.169735     196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 12:25:40.201318     196 cni.go:84] Creating CNI manager for ""
	I0603 12:25:40.201386     196 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:25:40.201497     196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:25:40.201550     196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.22.146.54 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-975100 NodeName:addons-975100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.22.146.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.22.146.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:25:40.201725     196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.22.146.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-975100"
	  kubeletExtraArgs:
	    node-ip: 172.22.146.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.22.146.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:25:40.213784     196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:25:40.230674     196 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:25:40.241793     196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:25:40.257175     196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 12:25:40.287281     196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:25:40.317250     196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0603 12:25:40.360555     196 ssh_runner.go:195] Run: grep 172.22.146.54	control-plane.minikube.internal$ /etc/hosts
	I0603 12:25:40.366322     196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.146.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:25:40.397340     196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:25:40.574065     196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:25:40.604988     196 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100 for IP: 172.22.146.54
	I0603 12:25:40.605057     196 certs.go:194] generating shared ca certs ...
	I0603 12:25:40.605115     196 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:40.605585     196 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 12:25:40.719826     196 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt ...
	I0603 12:25:40.719826     196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt: {Name:mk1d1f25727e6fcaf35d7d74de783ad2d2c6be81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:40.721827     196 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key ...
	I0603 12:25:40.721827     196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key: {Name:mkffeaed7182692572a4aaea1f77b60f45c78854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:40.722934     196 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 12:25:40.962004     196 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0603 12:25:40.962004     196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkc09bedb222360a1dcc92648b423932b0197d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:40.963877     196 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key ...
	I0603 12:25:40.963877     196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk23d29d7cc073007c63c291d9cf6fa322998d26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:40.965173     196 certs.go:256] generating profile certs ...
	I0603 12:25:40.965173     196 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.key
	I0603 12:25:40.965173     196 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt with IP's: []
	I0603 12:25:41.250500     196 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt ...
	I0603 12:25:41.250500     196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: {Name:mke65b0f331eca7fb907b5324942fa94afc64cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:41.252208     196 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.key ...
	I0603 12:25:41.252208     196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.key: {Name:mk2ea80a4b7901044fb8aa2da1d0a1f1c40be73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:41.253686     196 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\apiserver.key.cceb77f9
	I0603 12:25:41.254183     196 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\apiserver.crt.cceb77f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.22.146.54]
	I0603 12:25:41.852296     196 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\apiserver.crt.cceb77f9 ...
	I0603 12:25:41.852296     196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\apiserver.crt.cceb77f9: {Name:mk6fd589ea17b0904f4c360ea996f519428df1f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:41.853312     196 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\apiserver.key.cceb77f9 ...
	I0603 12:25:41.853312     196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\apiserver.key.cceb77f9: {Name:mkc524c0b1d7de7bf2067c79127fc531751feb19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:41.854336     196 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\apiserver.crt.cceb77f9 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\apiserver.crt
	I0603 12:25:41.867202     196 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\apiserver.key.cceb77f9 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\apiserver.key
	I0603 12:25:41.868210     196 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\proxy-client.key
	I0603 12:25:41.868210     196 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\proxy-client.crt with IP's: []
	I0603 12:25:42.355883     196 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\proxy-client.crt ...
	I0603 12:25:42.355883     196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\proxy-client.crt: {Name:mk3cbca08bbc24f827aebc1ee3bd93f55d1f8b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:42.358080     196 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\proxy-client.key ...
	I0603 12:25:42.358080     196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\proxy-client.key: {Name:mk01cf0b29771b8aa682b3c6cc09a1d69b4ab27f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:42.369514     196 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 12:25:42.370644     196 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 12:25:42.370794     196 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 12:25:42.371020     196 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 12:25:42.372299     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:25:42.416797     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:25:42.461862     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:25:42.507335     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:25:42.552067     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0603 12:25:42.595644     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:25:42.643038     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:25:42.686093     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:25:42.729795     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:25:42.772949     196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:25:42.813284     196 ssh_runner.go:195] Run: openssl version
	I0603 12:25:42.832602     196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:25:42.861991     196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:25:42.869289     196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:25:42.883234     196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:25:42.904968     196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
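	The two commands above install minikubeCA.pem the way OpenSSL expects to find trusted CAs: compute the certificate's subject hash, then link <hash>.0 in /etc/ssl/certs to the PEM. Using the hash implied by the symlink name in this run (b5213941):
	
	    # Subject hash of the minikube CA; OpenSSL looks certificates up by this name.
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # -> b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	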
	I0603 12:25:42.937196     196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:25:42.944205     196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 12:25:42.944565     196 kubeadm.go:391] StartCluster: {Name:addons-975100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:addons-975100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.54 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:25:42.953155     196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 12:25:42.984223     196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 12:25:43.017242     196 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:25:43.046859     196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:25:43.066304     196 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:25:43.066304     196 kubeadm.go:156] found existing configuration files:
	
	I0603 12:25:43.077863     196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:25:43.093781     196 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:25:43.105813     196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:25:43.133842     196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:25:43.149440     196 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:25:43.161724     196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:25:43.190230     196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:25:43.206615     196 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:25:43.219552     196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:25:43.251588     196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:25:43.267813     196 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:25:43.280323     196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:25:43.296696     196 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:25:43.550509     196 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:25:56.723005     196 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:25:56.723172     196 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:25:56.723416     196 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:25:56.723731     196 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:25:56.723937     196 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:25:56.723937     196 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:25:56.727853     196 out.go:204]   - Generating certificates and keys ...
	I0603 12:25:56.727853     196 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:25:56.728386     196 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:25:56.728650     196 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 12:25:56.728650     196 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 12:25:56.728650     196 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 12:25:56.728650     196 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 12:25:56.729187     196 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 12:25:56.729808     196 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-975100 localhost] and IPs [172.22.146.54 127.0.0.1 ::1]
	I0603 12:25:56.730006     196 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 12:25:56.730182     196 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-975100 localhost] and IPs [172.22.146.54 127.0.0.1 ::1]
	I0603 12:25:56.730182     196 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 12:25:56.730182     196 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 12:25:56.730774     196 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 12:25:56.731123     196 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:25:56.731266     196 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:25:56.731420     196 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:25:56.731420     196 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:25:56.731420     196 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:25:56.731420     196 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:25:56.732144     196 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:25:56.732399     196 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:25:56.735979     196 out.go:204]   - Booting up control plane ...
	I0603 12:25:56.736240     196 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:25:56.736240     196 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:25:56.736240     196 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:25:56.736240     196 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:25:56.736240     196 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:25:56.736240     196 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:25:56.737313     196 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:25:56.737550     196 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:25:56.737690     196 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001472316s
	I0603 12:25:56.737944     196 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:25:56.738121     196 kubeadm.go:309] [api-check] The API server is healthy after 6.50262722s
	I0603 12:25:56.738459     196 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:25:56.738801     196 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:25:56.739003     196 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:25:56.739513     196 kubeadm.go:309] [mark-control-plane] Marking the node addons-975100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:25:56.739513     196 kubeadm.go:309] [bootstrap-token] Using token: 56hne6.vfiohhxxkyaegf05
	I0603 12:25:56.743628     196 out.go:204]   - Configuring RBAC rules ...
	I0603 12:25:56.744691     196 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:25:56.744691     196 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:25:56.744691     196 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:25:56.745464     196 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:25:56.745679     196 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:25:56.745940     196 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:25:56.746194     196 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:25:56.746194     196 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:25:56.746416     196 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:25:56.746416     196 kubeadm.go:309] 
	I0603 12:25:56.746416     196 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:25:56.746416     196 kubeadm.go:309] 
	I0603 12:25:56.746416     196 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:25:56.746416     196 kubeadm.go:309] 
	I0603 12:25:56.746416     196 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:25:56.746979     196 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:25:56.747159     196 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:25:56.747230     196 kubeadm.go:309] 
	I0603 12:25:56.747354     196 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:25:56.747354     196 kubeadm.go:309] 
	I0603 12:25:56.747354     196 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:25:56.747354     196 kubeadm.go:309] 
	I0603 12:25:56.747354     196 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:25:56.747354     196 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:25:56.747941     196 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:25:56.747941     196 kubeadm.go:309] 
	I0603 12:25:56.748043     196 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:25:56.748043     196 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:25:56.748043     196 kubeadm.go:309] 
	I0603 12:25:56.748608     196 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 56hne6.vfiohhxxkyaegf05 \
	I0603 12:25:56.748707     196 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f \
	I0603 12:25:56.748707     196 kubeadm.go:309] 	--control-plane 
	I0603 12:25:56.748707     196 kubeadm.go:309] 
	I0603 12:25:56.749055     196 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:25:56.749055     196 kubeadm.go:309] 
	I0603 12:25:56.749319     196 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 56hne6.vfiohhxxkyaegf05 \
	I0603 12:25:56.749434     196 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f 
	I0603 12:25:56.749434     196 cni.go:84] Creating CNI manager for ""
	I0603 12:25:56.749434     196 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:25:56.754766     196 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:25:56.768752     196 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:25:56.786964     196 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 12:25:56.821196     196 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:25:56.836960     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-975100 minikube.k8s.io/updated_at=2024_06_03T12_25_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=addons-975100 minikube.k8s.io/primary=true
	I0603 12:25:56.836960     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:56.845391     196 ops.go:34] apiserver oom_adj: -16
	I0603 12:25:57.019961     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:57.534136     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:58.036864     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:58.528335     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:59.032577     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:59.533039     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:00.034756     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:00.522716     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:01.034434     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:01.533309     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:02.034183     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:02.537182     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:03.022648     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:03.526700     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:04.025184     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:04.527840     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:05.030119     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:05.536353     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:06.024354     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:06.527804     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:07.027917     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:07.528981     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:08.027135     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:08.527037     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:09.032805     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:09.526075     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:10.030342     196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:26:10.167950     196 kubeadm.go:1107] duration metric: took 13.3466422s to wait for elevateKubeSystemPrivileges
	W0603 12:26:10.168193     196 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:26:10.168281     196 kubeadm.go:393] duration metric: took 27.2234605s to StartCluster
	I0603 12:26:10.168281     196 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:26:10.168519     196 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:26:10.169211     196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:26:10.171202     196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 12:26:10.171395     196 start.go:234] Will wait 6m0s for node &{Name: IP:172.22.146.54 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 12:26:10.175802     196 out.go:177] * Verifying Kubernetes components...
	I0603 12:26:10.171395     196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0603 12:26:10.171833     196 config.go:182] Loaded profile config "addons-975100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:26:10.180790     196 addons.go:69] Setting yakd=true in profile "addons-975100"
	I0603 12:26:10.180790     196 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-975100"
	I0603 12:26:10.180790     196 addons.go:69] Setting helm-tiller=true in profile "addons-975100"
	I0603 12:26:10.180790     196 addons.go:69] Setting registry=true in profile "addons-975100"
	I0603 12:26:10.180790     196 addons.go:234] Setting addon registry=true in "addons-975100"
	I0603 12:26:10.180790     196 addons.go:69] Setting metrics-server=true in profile "addons-975100"
	I0603 12:26:10.180790     196 addons.go:69] Setting storage-provisioner=true in profile "addons-975100"
	I0603 12:26:10.180790     196 addons.go:69] Setting volcano=true in profile "addons-975100"
	I0603 12:26:10.180790     196 addons.go:69] Setting inspektor-gadget=true in profile "addons-975100"
	I0603 12:26:10.180790     196 addons.go:234] Setting addon storage-provisioner=true in "addons-975100"
	I0603 12:26:10.180790     196 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-975100"
	I0603 12:26:10.180790     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.180790     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.180790     196 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-975100"
	I0603 12:26:10.180790     196 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-975100"
	I0603 12:26:10.180790     196 addons.go:69] Setting gcp-auth=true in profile "addons-975100"
	I0603 12:26:10.180790     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.180790     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.180790     196 mustload.go:65] Loading cluster: addons-975100
	I0603 12:26:10.180790     196 addons.go:234] Setting addon helm-tiller=true in "addons-975100"
	I0603 12:26:10.180790     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.180790     196 addons.go:234] Setting addon metrics-server=true in "addons-975100"
	I0603 12:26:10.181799     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.180790     196 addons.go:69] Setting default-storageclass=true in profile "addons-975100"
	I0603 12:26:10.181799     196 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-975100"
	I0603 12:26:10.181799     196 config.go:182] Loaded profile config "addons-975100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:26:10.180790     196 addons.go:234] Setting addon volcano=true in "addons-975100"
	I0603 12:26:10.181799     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.180790     196 addons.go:69] Setting ingress-dns=true in profile "addons-975100"
	I0603 12:26:10.182795     196 addons.go:234] Setting addon ingress-dns=true in "addons-975100"
	I0603 12:26:10.182795     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.180790     196 addons.go:69] Setting ingress=true in profile "addons-975100"
	I0603 12:26:10.182795     196 addons.go:234] Setting addon ingress=true in "addons-975100"
	I0603 12:26:10.182795     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.180790     196 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-975100"
	I0603 12:26:10.182795     196 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-975100"
	I0603 12:26:10.180790     196 addons.go:234] Setting addon yakd=true in "addons-975100"
	I0603 12:26:10.182795     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.180790     196 addons.go:69] Setting cloud-spanner=true in profile "addons-975100"
	I0603 12:26:10.183798     196 addons.go:234] Setting addon cloud-spanner=true in "addons-975100"
	I0603 12:26:10.183798     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.180790     196 addons.go:234] Setting addon inspektor-gadget=true in "addons-975100"
	I0603 12:26:10.183798     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.180790     196 addons.go:69] Setting volumesnapshots=true in profile "addons-975100"
	I0603 12:26:10.184815     196 addons.go:234] Setting addon volumesnapshots=true in "addons-975100"
	I0603 12:26:10.184815     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:10.185796     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.185796     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.186821     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.186821     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.186821     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.187800     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.188806     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.188806     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.189803     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.189803     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.190804     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.190804     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.190804     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.191840     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.191840     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.192798     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:10.203555     196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:26:11.420299     196 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.2490866s)
	I0603 12:26:11.420968     196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.22.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 12:26:11.420968     196 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.2174025s)
	I0603 12:26:11.442278     196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:26:12.968469     196 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.22.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.5474884s)
	I0603 12:26:12.968672     196 start.go:946] {"host.minikube.internal": 172.22.144.1} host record injected into CoreDNS's ConfigMap
	I0603 12:26:12.974708     196 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.5324177s)
	I0603 12:26:12.977791     196 node_ready.go:35] waiting up to 6m0s for node "addons-975100" to be "Ready" ...
	I0603 12:26:13.357781     196 node_ready.go:49] node "addons-975100" has status "Ready":"True"
	I0603 12:26:13.357781     196 node_ready.go:38] duration metric: took 379.9861ms for node "addons-975100" to be "Ready" ...
	I0603 12:26:13.357781     196 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:26:13.437791     196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace to be "Ready" ...
	I0603 12:26:13.799997     196 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-975100" context rescaled to 1 replicas
	I0603 12:26:15.822569     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:16.816299     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:16.816299     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:16.835810     196 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0603 12:26:16.864812     196 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0603 12:26:16.864812     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0603 12:26:16.864812     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.135527     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.135527     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.140291     196 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0603 12:26:17.146611     196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0603 12:26:17.146611     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.151616     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.152621     196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0603 12:26:17.155031     196 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-975100"
	I0603 12:26:17.157459     196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0603 12:26:17.157459     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:17.162874     196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0603 12:26:17.161652     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.169882     196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0603 12:26:17.175871     196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0603 12:26:17.178857     196 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0603 12:26:17.198874     196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0603 12:26:17.198874     196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0603 12:26:17.198874     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.227763     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.227763     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.235038     196 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0603 12:26:17.238129     196 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0603 12:26:17.238129     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0603 12:26:17.238129     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.237062     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.241063     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.247059     196 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0603 12:26:17.250057     196 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:26:17.250057     196 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:26:17.251049     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.379311     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.379311     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.387172     196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 12:26:17.382185     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.383172     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.395162     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.397275     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.400290     196 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:26:17.397543     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.397543     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.399821     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.401701     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.410185     196 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:26:17.413140     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:26:17.413140     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.416895     196 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.7.0
	I0603 12:26:17.413140     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.413140     196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 12:26:17.413140     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.420635     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:17.432577     196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0603 12:26:17.435561     196 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0603 12:26:17.460494     196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0603 12:26:17.460494     196 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0603 12:26:17.460494     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.435561     196 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0603 12:26:17.462312     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0603 12:26:17.435561     196 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.7.0
	I0603 12:26:17.438560     196 addons.go:234] Setting addon default-storageclass=true in "addons-975100"
	I0603 12:26:17.462312     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:17.478009     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.517970     196 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.7.0
	I0603 12:26:17.514315     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.619102     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.619102     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.623100     196 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0603 12:26:17.632105     196 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0603 12:26:17.632105     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0603 12:26:17.632105     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.689616     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.689616     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.692618     196 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0603 12:26:17.695611     196 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0603 12:26:17.695611     196 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0603 12:26:17.695611     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.738773     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.738773     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.742782     196 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0603 12:26:17.748787     196 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0603 12:26:17.748787     196 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0603 12:26:17.748787     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.798049     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:17.798049     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:17.829052     196 out.go:177]   - Using image docker.io/registry:2.8.3
	I0603 12:26:17.874053     196 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0603 12:26:17.901875     196 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0603 12:26:17.902864     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0603 12:26:17.902864     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:17.934177     196 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0603 12:26:17.934177     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (626760 bytes)
	I0603 12:26:17.934177     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:18.118664     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:18.996811     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:18.996811     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:19.000083     196 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0603 12:26:19.002174     196 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0603 12:26:19.002769     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0603 12:26:19.002769     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:20.677697     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:23.057091     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:23.606348     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:23.606348     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:23.606536     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:23.607143     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:23.607143     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:23.607143     196 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:26:23.607143     196 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:26:23.607143     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:23.683966     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:23.683966     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:23.685380     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:23.705327     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:23.706334     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:23.706334     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:23.728328     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:23.728328     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:23.728328     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:23.762314     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:23.762314     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:23.762314     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:23.834624     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:23.834624     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:23.834624     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:24.321035     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:24.322056     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:24.322056     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:24.353052     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:24.353052     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:24.353052     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:24.363054     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:24.363054     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:24.443500     196 out.go:177]   - Using image docker.io/busybox:stable
	I0603 12:26:24.584127     196 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0603 12:26:24.642113     196 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0603 12:26:24.642113     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0603 12:26:24.642113     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:24.606108     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:24.661717     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:24.661717     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:24.628112     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:24.667359     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:24.667359     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:24.769717     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:24.769717     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:24.770736     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:25.536922     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:25.545892     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:25.545892     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:25.545892     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:26.765379     196 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0603 12:26:26.765379     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:27.941758     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:28.667805     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:28.667805     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:28.667805     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:30.015951     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:30.169538     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:30.169538     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:30.169538     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:30.783522     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:30.783522     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:30.783522     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:30.837455     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:30.837455     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:30.837947     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:31.191025     196 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0603 12:26:31.191025     196 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0603 12:26:31.275033     196 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0603 12:26:31.275033     196 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0603 12:26:31.369512     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:31.369512     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:31.370520     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:31.382926     196 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0603 12:26:31.382926     196 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0603 12:26:31.423083     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:31.423083     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:31.423083     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:31.527176     196 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0603 12:26:31.527176     196 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0603 12:26:31.609348     196 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 12:26:31.609348     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0603 12:26:31.702608     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 12:26:31.716876     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:31.716876     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:31.716876     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:31.772116     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:31.772116     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:31.772116     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:31.826650     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:31.826650     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:31.827639     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:31.909551     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:31.909551     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:31.909551     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:31.916660     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0603 12:26:31.955446     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:31.955446     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:31.955446     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:31.963423     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0603 12:26:32.023655     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:32.023744     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:32.024049     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:32.144336     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:32.144630     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:32.145062     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:32.214920     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:32.214920     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:32.214920     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:32.325006     196 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:26:32.325006     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0603 12:26:32.347616     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0603 12:26:32.403030     196 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0603 12:26:32.403030     196 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0603 12:26:32.442066     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:32.442066     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:32.442167     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:32.459402     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:32.531896     196 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0603 12:26:32.532013     196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0603 12:26:32.559880     196 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:26:32.559930     196 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:26:32.677533     196 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0603 12:26:32.677533     196 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0603 12:26:32.770746     196 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0603 12:26:32.770746     196 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0603 12:26:32.795586     196 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0603 12:26:32.795586     196 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0603 12:26:32.840705     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:32.840705     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:32.840898     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:32.946926     196 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:26:32.946926     196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0603 12:26:32.946926     196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0603 12:26:32.946926     196 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:26:32.956480     196 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0603 12:26:32.956564     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0603 12:26:33.062176     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:26:33.084180     196 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0603 12:26:33.084180     196 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0603 12:26:33.220718     196 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0603 12:26:33.220781     196 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0603 12:26:33.300320     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:26:33.317545     196 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0603 12:26:33.317545     196 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0603 12:26:33.344323     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0603 12:26:33.447107     196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0603 12:26:33.447107     196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0603 12:26:33.590130     196 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0603 12:26:33.590130     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0603 12:26:33.611026     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:33.611026     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:33.611631     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:33.692570     196 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0603 12:26:33.692693     196 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0603 12:26:33.772518     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0603 12:26:33.777478     196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0603 12:26:33.777478     196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0603 12:26:33.826506     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0603 12:26:33.946442     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0603 12:26:34.051200     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:34.051881     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:34.052078     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:34.116024     196 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0603 12:26:34.116115     196 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0603 12:26:34.186193     196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0603 12:26:34.186193     196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0603 12:26:34.308125     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:34.309124     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:34.309124     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:34.478194     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0603 12:26:34.500715     196 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0603 12:26:34.500815     196 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0603 12:26:34.610048     196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0603 12:26:34.610048     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0603 12:26:34.955904     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:35.200878     196 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0603 12:26:35.200878     196 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0603 12:26:35.248354     196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0603 12:26:35.248354     196 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0603 12:26:35.273243     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:26:35.408639     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:35.408639     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:35.408879     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:35.462942     196 pod_ready.go:97] pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 12:26:35 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 12:26:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 12:26:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 12:26:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 12:26:10 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.22.146.54 HostIPs:[{IP:172.22.146.54}] PodIP: PodIPs:[] StartTime:2024-06-03 12:26:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-06-03 12:26:24 +0000 UTC,FinishedAt:2024-06-03 12:26:34 +0000 UTC,ContainerID:docker://b30daf7678a90eba161f1f2526a881ef90c9940af836f920983f3484cd3554c1,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://b30daf7678a90eba161f1f2526a881ef90c9940af836f920983f3484cd3554c1 Started:0xc00197a950 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0603 12:26:35.463000     196 pod_ready.go:81] duration metric: took 22.0240173s for pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace to be "Ready" ...
	E0603 12:26:35.463000     196 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-k2qlf" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 12:26:35 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 12:26:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 12:26:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 12:26:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-06-03 12:26:10 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.22.146.54 HostIPs:[{IP:172.22.146.54}] PodIP: PodIPs:[] StartTime:2024-06-03 12:26:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-06-03 12:26:24 +0000 UTC,FinishedAt:2024-06-03 12:26:34 +0000 UTC,ContainerID:docker://b30daf7678a90eba161f1f2526a881ef90c9940af836f920983f3484cd3554c1,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://b30daf7678a90eba161f1f2526a881ef90c9940af836f920983f3484cd3554c1 Started:0xc00197a950 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0603 12:26:35.463056     196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace to be "Ready" ...
	I0603 12:26:35.580245     196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0603 12:26:35.580294     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0603 12:26:35.594576     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0603 12:26:35.698795     196 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0603 12:26:35.698795     196 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0603 12:26:35.937621     196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0603 12:26:35.937621     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0603 12:26:35.986355     196 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0603 12:26:35.986398     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0603 12:26:36.072509     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0603 12:26:36.326570     196 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0603 12:26:36.575555     196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0603 12:26:36.575555     196 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0603 12:26:36.806198     196 addons.go:234] Setting addon gcp-auth=true in "addons-975100"
	I0603 12:26:36.806433     196 host.go:66] Checking if "addons-975100" exists ...
	I0603 12:26:36.807720     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:37.194420     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0603 12:26:37.480717     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:39.436528     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:39.436528     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:39.449503     196 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0603 12:26:39.449503     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-975100 ).state
	I0603 12:26:39.549223     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:41.634816     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:41.917496     196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:26:41.918525     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:41.918525     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-975100 ).networkadapters[0]).ipaddresses[0]
	I0603 12:26:42.442250     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.7395516s)
	W0603 12:26:42.442332     196 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0603 12:26:42.442405     196 retry.go:31] will retry after 240.019314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0603 12:26:42.703281     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
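Note: the initial apply failed because the VolumeSnapshotClass was submitted in the same batch as the CRDs that define it, so the API server had no mapping for the kind yet; the retry above falls back to kubectl apply --force. A minimal manual alternative, sketched here under the assumption that the same kubeconfig and kubectl binary paths shown in the log are used on the guest, would be to wait for the snapshot CRDs to report Established and then re-apply only the snapshot class:

		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml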
	I0603 12:26:44.008445     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:44.652391     196 main.go:141] libmachine: [stdout =====>] : 172.22.146.54
	
	I0603 12:26:44.652391     196 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:26:44.652391     196 sshutil.go:53] new ssh client: &{IP:172.22.146.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-975100\id_rsa Username:docker}
	I0603 12:26:46.422863     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (14.4593187s)
	I0603 12:26:46.422863     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (14.0751287s)
	I0603 12:26:46.422863     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (14.5060814s)
	I0603 12:26:46.422863     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.3605747s)
	I0603 12:26:46.422863     196 addons.go:475] Verifying addon ingress=true in "addons-975100"
	I0603 12:26:46.422863     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.1224325s)
	I0603 12:26:46.425795     196 out.go:177] * Verifying ingress addon...
	I0603 12:26:46.422863     196 addons.go:475] Verifying addon metrics-server=true in "addons-975100"
	I0603 12:26:46.422863     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.0784299s)
	I0603 12:26:46.422863     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (12.650239s)
	I0603 12:26:46.429384     196 addons.go:475] Verifying addon registry=true in "addons-975100"
	I0603 12:26:46.432699     196 out.go:177] * Verifying registry addon...
	I0603 12:26:46.431707     196 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0603 12:26:46.436694     196 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0603 12:26:46.451713     196 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0603 12:26:46.451713     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:46.453717     196 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0603 12:26:46.453717     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:46.488711     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:46.969511     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:46.969647     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:47.454664     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:47.455942     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:47.958174     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:47.959339     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:48.468543     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:48.503463     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:48.556905     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:48.951224     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:48.960788     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:49.587014     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:49.587063     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:49.965492     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:49.965749     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:50.494609     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:50.504973     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:50.669298     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:50.956248     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:51.000510     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:51.067252     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (17.2406006s)
	I0603 12:26:51.067411     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (17.1208251s)
	I0603 12:26:51.067411     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (16.5890771s)
	I0603 12:26:51.070011     196 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-975100 service yakd-dashboard -n yakd-dashboard
	
	I0603 12:26:51.067545     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (15.7941693s)
	I0603 12:26:51.067688     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (15.4729814s)
	I0603 12:26:51.067818     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (14.9951368s)
	W0603 12:26:51.107124     196 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
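Note: the storage-provisioner-rancher warning above is an optimistic-concurrency conflict on the default-class annotation, not a failed install; the local-path StorageClass exists but could not be marked default because the object changed between read and update. A hedged manual follow-up, assuming the addons-975100 context from this run and the standard default-class annotation, would be:

		kubectl --context addons-975100 patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'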
	I0603 12:26:51.456713     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:51.460675     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:51.838143     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (14.6436002s)
	I0603 12:26:51.838346     196 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-975100"
	I0603 12:26:51.838346     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.134989s)
	I0603 12:26:51.844831     196 out.go:177] * Verifying csi-hostpath-driver addon...
	I0603 12:26:51.838549     196 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (12.3889411s)
	I0603 12:26:51.854116     196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 12:26:51.852084     196 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0603 12:26:51.860201     196 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0603 12:26:51.864202     196 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0603 12:26:51.864202     196 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0603 12:26:51.875864     196 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0603 12:26:51.876170     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:51.928768     196 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0603 12:26:51.928768     196 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0603 12:26:51.961887     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:51.962205     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:51.986058     196 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0603 12:26:51.986058     196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0603 12:26:52.046365     196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0603 12:26:52.378359     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:52.438376     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:52.443381     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:52.872685     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:52.967362     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:52.969415     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:52.978357     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:53.317392     196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.2710162s)
	I0603 12:26:53.324918     196 addons.go:475] Verifying addon gcp-auth=true in "addons-975100"
	I0603 12:26:53.328806     196 out.go:177] * Verifying gcp-auth addon...
	I0603 12:26:53.332808     196 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0603 12:26:53.351809     196 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0603 12:26:53.395701     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:53.441196     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:53.451816     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:53.876973     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:53.955538     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:53.956576     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:54.366237     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:54.449714     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:54.456612     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:54.871172     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:54.950985     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:54.952049     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:54.982085     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:55.376133     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:55.453258     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:55.453867     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:55.880044     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:55.943108     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:55.945202     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:56.375896     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:56.438400     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:56.442812     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:56.868165     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:56.946421     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:56.950777     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:57.373822     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:57.451576     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:57.452154     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:57.485556     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:57.864963     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:57.946873     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:57.956291     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:58.372775     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:58.455550     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:58.457013     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:58.877144     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:58.940416     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:58.947090     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:59.367068     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:59.447539     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:59.449404     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:59.498448     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:59.880258     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:59.938106     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:59.947136     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:00.642603     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:00.642658     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:00.642691     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:01.057891     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:01.060919     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:01.062750     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:01.367648     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:01.449566     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:01.449566     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:01.867896     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:01.945843     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:01.950727     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:01.975058     196 pod_ready.go:102] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"False"
	I0603 12:27:02.373202     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:02.454919     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:02.456420     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:02.484027     196 pod_ready.go:92] pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace has status "Ready":"True"
	I0603 12:27:02.484027     196 pod_ready.go:81] duration metric: took 27.0207437s for pod "coredns-7db6d8ff4d-x92m2" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:02.484027     196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-975100" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:02.494578     196 pod_ready.go:92] pod "etcd-addons-975100" in "kube-system" namespace has status "Ready":"True"
	I0603 12:27:02.494637     196 pod_ready.go:81] duration metric: took 10.5508ms for pod "etcd-addons-975100" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:02.494637     196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-975100" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:02.504776     196 pod_ready.go:92] pod "kube-apiserver-addons-975100" in "kube-system" namespace has status "Ready":"True"
	I0603 12:27:02.504776     196 pod_ready.go:81] duration metric: took 10.1387ms for pod "kube-apiserver-addons-975100" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:02.504776     196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-975100" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:02.513110     196 pod_ready.go:92] pod "kube-controller-manager-addons-975100" in "kube-system" namespace has status "Ready":"True"
	I0603 12:27:02.513167     196 pod_ready.go:81] duration metric: took 8.3348ms for pod "kube-controller-manager-addons-975100" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:02.513167     196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-whw2f" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:02.524483     196 pod_ready.go:92] pod "kube-proxy-whw2f" in "kube-system" namespace has status "Ready":"True"
	I0603 12:27:02.524483     196 pod_ready.go:81] duration metric: took 11.3158ms for pod "kube-proxy-whw2f" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:02.524483     196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-975100" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:02.876480     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:02.880465     196 pod_ready.go:92] pod "kube-scheduler-addons-975100" in "kube-system" namespace has status "Ready":"True"
	I0603 12:27:02.880465     196 pod_ready.go:81] duration metric: took 355.9798ms for pod "kube-scheduler-addons-975100" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:02.880465     196 pod_ready.go:38] duration metric: took 49.5222687s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:27:02.880465     196 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:27:02.892491     196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:27:02.926769     196 api_server.go:72] duration metric: took 52.75493s to wait for apiserver process to appear ...
	I0603 12:27:02.926769     196 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:27:02.926769     196 api_server.go:253] Checking apiserver healthz at https://172.22.146.54:8443/healthz ...
	I0603 12:27:02.934207     196 api_server.go:279] https://172.22.146.54:8443/healthz returned 200:
	ok
	I0603 12:27:02.936591     196 api_server.go:141] control plane version: v1.30.1
	I0603 12:27:02.936658     196 api_server.go:131] duration metric: took 9.8897ms to wait for apiserver health ...
	I0603 12:27:02.936658     196 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:27:02.939845     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:02.947842     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:03.093251     196 system_pods.go:59] 18 kube-system pods found
	I0603 12:27:03.093251     196 system_pods.go:61] "coredns-7db6d8ff4d-x92m2" [20173c2f-bef6-436d-98be-b94e4ac03be3] Running
	I0603 12:27:03.093251     196 system_pods.go:61] "csi-hostpath-attacher-0" [8a029703-fad4-4880-9ff3-f580e7373239] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0603 12:27:03.093251     196 system_pods.go:61] "csi-hostpath-resizer-0" [8ecf3986-14fc-45d9-bbcd-146eb2286f38] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0603 12:27:03.093251     196 system_pods.go:61] "csi-hostpathplugin-6gcgw" [d627e709-3791-408f-b7b5-6e90c1f05b4d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0603 12:27:03.093251     196 system_pods.go:61] "etcd-addons-975100" [2976657d-d847-4b4c-8edf-520f7a79a8e8] Running
	I0603 12:27:03.093251     196 system_pods.go:61] "kube-apiserver-addons-975100" [93cd9f1e-b136-4fc6-81e3-61d9808c2f2a] Running
	I0603 12:27:03.093251     196 system_pods.go:61] "kube-controller-manager-addons-975100" [0e164c36-0b91-4439-86ce-e952c3d25662] Running
	I0603 12:27:03.093251     196 system_pods.go:61] "kube-ingress-dns-minikube" [d6d3e47e-c29e-4cc4-86ed-ee82dbc97d99] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0603 12:27:03.093251     196 system_pods.go:61] "kube-proxy-whw2f" [0c9586b4-4c39-4e30-87bd-7d4fd1f6bcff] Running
	I0603 12:27:03.093251     196 system_pods.go:61] "kube-scheduler-addons-975100" [24f34458-b302-4f70-9bb0-a50783171939] Running
	I0603 12:27:03.093785     196 system_pods.go:61] "metrics-server-c59844bb4-jhc6h" [4187f9ec-f978-4643-838b-d3875d916087] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:27:03.093823     196 system_pods.go:61] "nvidia-device-plugin-daemonset-7kz8w" [8712d628-4348-427e-9373-ce7d8f1b2e9b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0603 12:27:03.093823     196 system_pods.go:61] "registry-4mwfz" [04ed4d5a-632f-444a-b01c-23b8e51aaa10] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0603 12:27:03.093823     196 system_pods.go:61] "registry-proxy-v26mc" [77d080cc-0158-445b-ac0f-a5c067638727] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0603 12:27:03.093921     196 system_pods.go:61] "snapshot-controller-745499f584-b5wtk" [647f93d4-b8bb-463e-a544-9c18445e1b8d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0603 12:27:03.093948     196 system_pods.go:61] "snapshot-controller-745499f584-brj7g" [469e9829-7682-4268-bcd7-3ca312fb086e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0603 12:27:03.094007     196 system_pods.go:61] "storage-provisioner" [c2bbbbd4-1456-496b-8cf7-2e8aae82b9cf] Running
	I0603 12:27:03.094007     196 system_pods.go:61] "tiller-deploy-6677d64bcd-zvpk6" [6a235ac0-d0ab-42fb-9971-c2242086334b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0603 12:27:03.094007     196 system_pods.go:74] duration metric: took 157.2892ms to wait for pod list to return data ...
	I0603 12:27:03.094007     196 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:27:03.286820     196 default_sa.go:45] found service account: "default"
	I0603 12:27:03.286995     196 default_sa.go:55] duration metric: took 192.9864ms for default service account to be created ...
	I0603 12:27:03.287076     196 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:27:03.369220     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:03.448973     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:03.449957     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:03.501500     196 system_pods.go:86] 18 kube-system pods found
	I0603 12:27:03.501500     196 system_pods.go:89] "coredns-7db6d8ff4d-x92m2" [20173c2f-bef6-436d-98be-b94e4ac03be3] Running
	I0603 12:27:03.501500     196 system_pods.go:89] "csi-hostpath-attacher-0" [8a029703-fad4-4880-9ff3-f580e7373239] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0603 12:27:03.501500     196 system_pods.go:89] "csi-hostpath-resizer-0" [8ecf3986-14fc-45d9-bbcd-146eb2286f38] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0603 12:27:03.501500     196 system_pods.go:89] "csi-hostpathplugin-6gcgw" [d627e709-3791-408f-b7b5-6e90c1f05b4d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0603 12:27:03.501500     196 system_pods.go:89] "etcd-addons-975100" [2976657d-d847-4b4c-8edf-520f7a79a8e8] Running
	I0603 12:27:03.501500     196 system_pods.go:89] "kube-apiserver-addons-975100" [93cd9f1e-b136-4fc6-81e3-61d9808c2f2a] Running
	I0603 12:27:03.501500     196 system_pods.go:89] "kube-controller-manager-addons-975100" [0e164c36-0b91-4439-86ce-e952c3d25662] Running
	I0603 12:27:03.501500     196 system_pods.go:89] "kube-ingress-dns-minikube" [d6d3e47e-c29e-4cc4-86ed-ee82dbc97d99] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0603 12:27:03.501500     196 system_pods.go:89] "kube-proxy-whw2f" [0c9586b4-4c39-4e30-87bd-7d4fd1f6bcff] Running
	I0603 12:27:03.501500     196 system_pods.go:89] "kube-scheduler-addons-975100" [24f34458-b302-4f70-9bb0-a50783171939] Running
	I0603 12:27:03.501500     196 system_pods.go:89] "metrics-server-c59844bb4-jhc6h" [4187f9ec-f978-4643-838b-d3875d916087] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 12:27:03.501500     196 system_pods.go:89] "nvidia-device-plugin-daemonset-7kz8w" [8712d628-4348-427e-9373-ce7d8f1b2e9b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0603 12:27:03.501500     196 system_pods.go:89] "registry-4mwfz" [04ed4d5a-632f-444a-b01c-23b8e51aaa10] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0603 12:27:03.501500     196 system_pods.go:89] "registry-proxy-v26mc" [77d080cc-0158-445b-ac0f-a5c067638727] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0603 12:27:03.501500     196 system_pods.go:89] "snapshot-controller-745499f584-b5wtk" [647f93d4-b8bb-463e-a544-9c18445e1b8d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0603 12:27:03.501500     196 system_pods.go:89] "snapshot-controller-745499f584-brj7g" [469e9829-7682-4268-bcd7-3ca312fb086e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0603 12:27:03.501500     196 system_pods.go:89] "storage-provisioner" [c2bbbbd4-1456-496b-8cf7-2e8aae82b9cf] Running
	I0603 12:27:03.501500     196 system_pods.go:89] "tiller-deploy-6677d64bcd-zvpk6" [6a235ac0-d0ab-42fb-9971-c2242086334b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0603 12:27:03.501500     196 system_pods.go:126] duration metric: took 214.4227ms to wait for k8s-apps to be running ...
	I0603 12:27:03.502524     196 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:27:03.513528     196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:27:03.544707     196 system_svc.go:56] duration metric: took 42.1825ms WaitForService to wait for kubelet
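Note: the kubelet check above relies only on the exit status of systemctl is-active --quiet, which prints nothing and exits 0 when the unit is active. An equivalent one-liner on the guest, assuming systemd and SSH access as used elsewhere in this log, would be:

		sudo systemctl is-active --quiet kubelet && echo kubelet is running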
	I0603 12:27:03.544768     196 kubeadm.go:576] duration metric: took 53.3729246s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:27:03.544831     196 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:27:04.138240     196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:27:04.138240     196 node_conditions.go:123] node cpu capacity is 2
	I0603 12:27:04.138240     196 node_conditions.go:105] duration metric: took 593.4046ms to run NodePressure ...
	I0603 12:27:04.138240     196 start.go:240] waiting for startup goroutines ...
	I0603 12:27:04.146973     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:04.147152     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:04.150327     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:04.468786     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:04.469381     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:04.469527     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:04.867653     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:04.946315     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:04.948306     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:05.395981     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:05.651651     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:05.654991     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:05.874212     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:05.961822     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:05.964781     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:06.390690     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:06.484345     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:06.489410     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:06.870126     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:06.968769     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:06.999758     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:07.382149     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:07.454795     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:07.456809     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:07.869666     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:07.946315     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:07.953326     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:08.368369     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:08.449131     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:08.450163     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:08.877379     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:08.957020     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:08.961447     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:09.370651     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:09.453700     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:09.454753     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:09.873266     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:09.967063     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:09.967290     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:10.380255     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:10.441417     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:10.448093     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:10.871545     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:10.956008     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:10.956844     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:11.378161     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:11.439300     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:11.442747     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:11.871152     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:11.950698     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:11.950963     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:12.378743     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:12.859655     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:12.860543     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:12.872831     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:12.941288     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:12.945783     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:13.378624     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:13.443805     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:13.449553     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:13.907829     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:13.944441     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:13.947442     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:14.374836     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:14.460601     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:14.469182     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:14.865267     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:14.946351     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:14.948149     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:15.376067     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:15.453131     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:15.453131     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:15.871149     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:15.952056     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:15.952963     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:16.409826     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:16.455216     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:16.456720     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:16.868608     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:16.946210     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:16.950511     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:17.367453     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:17.452622     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:17.457002     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:17.872700     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:17.952642     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:17.955050     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:18.385477     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:18.463955     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:18.484747     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:18.910513     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:18.961224     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:18.963636     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:19.373278     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:19.453291     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:19.453291     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:19.865549     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:19.941141     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:19.946131     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:20.372872     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:20.453267     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:20.453267     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:20.876880     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:20.954437     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:20.957915     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:21.375745     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:21.453189     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:21.456210     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:21.874591     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:21.947090     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:21.947145     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:22.377305     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:22.438309     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:22.442717     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:22.874805     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:22.957652     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:22.957652     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:23.365033     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:23.440663     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:23.445407     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:23.871288     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:23.950341     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:23.953904     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:24.365954     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:24.446472     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:24.450414     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:24.872401     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:24.955767     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:24.956545     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:25.377620     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:25.440006     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:25.444440     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:25.871640     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:25.957848     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:25.957848     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:26.385844     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:26.462250     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:26.462250     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:26.870505     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:26.952989     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:26.957341     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:27.374830     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:27.450568     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:27.452083     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:27.880276     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:27.941387     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:27.945086     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:28.374023     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:28.450132     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:28.454621     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:28.876459     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:28.955676     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:28.956414     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:29.652007     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:29.652611     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:29.653222     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:30.067264     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:30.068487     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:30.071491     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:30.806149     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:30.806372     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:30.809128     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:30.878576     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:30.956047     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:30.956047     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:31.375184     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:31.456017     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:27:31.461957     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:31.868978     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:31.962444     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:31.963444     196 kapi.go:107] duration metric: took 45.5263668s to wait for kubernetes.io/minikube-addons=registry ...
	I0603 12:27:32.370719     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:32.450024     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:32.879158     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:32.946178     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:33.371796     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:33.448992     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:33.878721     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:33.955861     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:34.371196     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:34.448062     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:34.879124     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:34.939734     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:35.370299     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:35.448118     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:35.877525     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:35.939254     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:36.368257     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:36.446725     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:36.877632     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:36.939906     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:37.371747     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:37.453191     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:37.868493     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:37.945087     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:38.381034     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:38.441690     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:38.868678     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:38.948078     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:39.377633     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:39.443051     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:39.869753     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:39.945155     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:40.381893     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:40.441321     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:40.871856     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:40.949067     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:41.377693     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:41.454669     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:41.870371     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:41.948048     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:42.378951     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:42.438291     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:42.869219     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:42.953411     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:43.376979     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:43.452572     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:43.866826     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:43.944050     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:44.376829     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:44.452010     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:44.874597     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:44.955956     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:45.378636     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:45.440837     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:45.871234     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:45.948475     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:46.378699     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:46.440989     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:46.871179     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:46.947377     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:47.377645     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:47.453963     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:47.866158     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:47.944665     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:48.376899     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:48.453258     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:48.881674     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:48.951221     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:49.470327     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:49.470327     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:49.878868     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:49.941857     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:50.617841     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:50.625496     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:51.070246     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:51.071302     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:51.376443     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:51.452283     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:51.881322     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:51.958601     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:52.367201     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:52.445262     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:52.875995     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:52.954700     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:53.367249     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:53.444748     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:53.876609     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:54.190595     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:54.457867     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:54.460960     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:54.866056     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:54.945257     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:55.370242     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:55.456552     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:55.871473     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:55.947553     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:56.378618     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:56.440589     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:56.871493     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:56.948605     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:57.372512     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:57.454842     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:57.866501     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:57.943939     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:58.374236     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:58.450540     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:58.880247     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:58.942990     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:59.375176     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:59.452613     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:27:59.943803     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:27:59.950469     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:00.372024     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:00.447176     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:00.876874     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:00.964374     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:01.380765     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:01.459498     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:01.888033     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:01.965539     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:02.371292     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:02.446573     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:02.925226     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:02.967223     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:03.382214     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:03.447772     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:03.876958     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:03.951947     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:04.366911     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:04.444920     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:04.874585     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:04.953254     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:05.365413     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:05.445285     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:05.878684     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:05.938662     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:06.369768     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:06.448207     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:06.990722     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:06.992975     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:07.372846     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:07.450452     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:07.878961     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:07.940407     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:08.368696     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:08.446085     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:08.867219     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:08.944706     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:09.549143     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:09.558214     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:09.878160     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:09.946793     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:10.387844     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:10.454459     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:10.916660     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:10.951008     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:11.376210     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:11.451856     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:11.880328     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:11.939147     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:12.368745     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:12.444671     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:12.876745     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:12.952656     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:13.364623     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:13.443558     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:13.872908     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:13.950404     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:14.377533     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:14.441558     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:14.866235     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:14.942154     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:15.372467     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:15.450719     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:15.878172     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:15.940935     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:16.854869     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:16.862057     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:16.868457     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:16.944143     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:17.799366     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:17.803603     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:17.882856     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:17.945316     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:18.376053     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:18.441424     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:18.869125     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:18.951845     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:19.373246     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:19.451830     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:19.864987     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:19.942868     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:20.383250     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:20.457058     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:20.866320     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:20.941509     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:21.372406     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:21.661812     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:21.881164     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:21.956739     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:22.365487     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:22.454294     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:22.872661     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:22.946809     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:23.382073     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:23.440299     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:23.870679     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:23.948758     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:24.509178     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:24.509339     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:24.873244     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:24.954642     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:25.376977     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:25.438918     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:25.871221     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:25.949874     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:26.379380     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:26.443363     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:26.875033     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:26.954539     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:27.377975     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:27.439575     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:27.870859     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:27.948296     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:28.378202     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:28.441944     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:29.004883     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:29.006210     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:29.377683     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:29.441187     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:29.890195     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:29.953519     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:30.380292     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:30.448355     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:30.871059     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:30.949066     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:31.380943     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:31.440457     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:31.879566     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:31.946066     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:32.376185     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:32.463127     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:32.880761     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:32.942387     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:33.373411     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:33.449191     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:33.880518     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:33.952067     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:34.397323     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:34.444929     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:34.866015     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:34.945712     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:35.375518     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:35.459030     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:35.866784     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:35.945385     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:36.370172     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:36.445573     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:36.873730     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:36.950685     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:37.379448     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:37.485448     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:37.871345     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:37.952431     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:38.379296     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:38.440386     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:39.277259     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:39.277259     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:39.376621     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:39.448421     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:39.878003     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:39.953292     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:40.422706     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:40.440715     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:40.869475     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:40.949005     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:41.377648     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:41.440621     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:41.869785     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:41.948471     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:42.501311     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:42.501311     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:42.877739     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:42.942313     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:43.368643     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:43.447775     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:43.873785     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:43.959728     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:44.379146     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:44.441580     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:44.869076     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:44.949086     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:45.378014     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:45.454447     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:45.902742     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:45.954732     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:46.372759     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:46.447374     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:46.869032     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:46.953781     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:47.376376     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:47.455179     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:47.867604     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:47.945442     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:48.379788     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:48.451386     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:48.994371     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:48.999235     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:49.375679     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:49.449430     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:49.875372     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:49.953913     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:50.377313     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:50.451792     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:50.867993     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:50.944114     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:51.373015     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:51.450010     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:51.866021     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:51.951711     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:52.376979     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:52.453433     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:52.910828     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:52.944564     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:53.365006     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:53.444029     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:53.875708     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:53.964347     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:54.380646     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:54.440221     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:54.871942     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:54.948568     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:55.379269     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:55.453338     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:55.872606     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:55.946893     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:56.386030     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:56.458168     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:56.871260     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:56.943768     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:57.370072     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:57.449557     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:57.876845     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:57.953439     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:58.366096     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:58.443679     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:58.878594     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:58.954541     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:59.401518     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:59.500193     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:28:59.870976     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:28:59.950306     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:00.379734     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:00.453742     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:00.875972     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:00.943547     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:01.373487     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:01.452195     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:01.881101     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:01.954278     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:02.366467     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:02.445192     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:02.874022     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:02.952568     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:03.378371     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:03.454930     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:03.870269     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:03.943500     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:04.374166     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:04.452003     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:04.891637     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:04.946359     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:05.377612     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:05.455792     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:05.869806     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:05.944139     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:06.375924     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:06.455093     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:06.870185     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:06.944784     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:07.374439     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:07.452458     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:07.868093     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:07.945970     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:08.371731     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:08.450368     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:08.865429     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:08.943082     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:09.373624     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:09.453075     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:09.871282     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:09.953805     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:10.378121     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:10.441363     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:10.878541     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:10.951012     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:11.459599     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:11.461608     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:11.875524     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:11.950534     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:12.549517     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:12.662234     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:12.867406     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:12.943553     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:13.384955     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:13.452257     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:13.866742     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:13.947408     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:14.374490     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:14.451314     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:14.865617     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:14.944906     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:15.378297     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:15.454675     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:15.879999     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:15.955315     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:16.371192     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:16.451902     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:16.879278     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:16.941521     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:17.375682     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:17.441694     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:17.883163     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:17.943498     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:18.369583     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:18.448052     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:18.881101     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:18.943742     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:19.372509     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:19.448849     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:19.881275     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:20.202897     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:20.761782     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:20.764163     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:20.878355     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:20.954333     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:21.365484     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:21.444828     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:21.868764     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:21.942853     196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:29:22.377322     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:22.454826     196 kapi.go:107] duration metric: took 2m36.0218084s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0603 12:29:22.877116     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:23.380236     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:23.942474     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:24.370036     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:24.876879     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:25.508419     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:25.878446     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:26.373573     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:29:26.879892     196 kapi.go:107] duration metric: took 2m35.0265057s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0603 12:29:37.349536     196 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0603 12:29:37.349536     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:37.851163     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:38.347879     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:38.849239     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:39.352390     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:39.854869     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:40.354484     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:40.856026     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:41.347981     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:41.854018     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:42.353133     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:42.854471     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:43.354688     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:43.853150     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:44.353440     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:44.840092     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:45.355445     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:45.851891     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:46.350514     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:46.854563     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:47.341876     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:47.840956     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:48.343980     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:48.843914     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:49.343833     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:49.848512     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:50.349258     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:50.848958     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:51.350294     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:51.849698     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:52.352355     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:52.850694     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:53.353994     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:53.854716     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:54.340129     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:54.847151     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:55.345503     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:55.849167     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:56.355884     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:56.849471     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:57.356291     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:57.851078     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:58.355267     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:58.861504     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:59.353757     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:29:59.846795     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:00.351942     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:00.840217     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:01.347012     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:01.847341     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:02.349713     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:02.847526     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:03.348099     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:03.848032     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:04.351899     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:04.852636     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:05.351700     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:05.853463     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:06.342617     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:06.846977     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:07.346421     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:07.847366     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:08.349456     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:08.849262     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:09.352329     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:09.844888     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:10.349933     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:10.863166     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:11.353286     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:11.853771     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:12.349252     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:12.840929     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:13.350107     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:13.856646     196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:30:14.346135     196 kapi.go:107] duration metric: took 3m21.0116383s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0603 12:30:14.349315     196 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-975100 cluster.
	I0603 12:30:14.353617     196 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0603 12:30:14.359199     196 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0603 12:30:14.364039     196 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, helm-tiller, metrics-server, volcano, ingress-dns, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0603 12:30:14.370067     196 addons.go:510] duration metric: took 4m4.19662s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner helm-tiller metrics-server volcano ingress-dns inspektor-gadget yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0603 12:30:14.370067     196 start.go:245] waiting for cluster config update ...
	I0603 12:30:14.370514     196 start.go:254] writing updated cluster config ...
	I0603 12:30:14.382799     196 ssh_runner.go:195] Run: rm -f paused
	I0603 12:30:14.642214     196 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:30:14.646577     196 out.go:177] * Done! kubectl is now configured to use "addons-975100" cluster and "default" namespace by default
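
A note on the log pattern above: the repeated kapi.go:96 "waiting for pod" entries are minikube polling each addon's pods by label selector until they leave Pending, and the kapi.go:107 lines record how long each wait took. For illustration only, here is a minimal client-go sketch of that kind of wait loop (an assumption-laden example, not minikube's actual kapi.go code; it assumes k8s.io/client-go is available and a cluster is reachable via ~/.kube/config):

// Illustrative sketch only; not minikube's kapi.go implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in namespace ns until all of them
// report phase Running, or the timeout expires.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					// Mirrors the "waiting for pod ..., current state: Pending" lines above.
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for pods matching %q", selector)
}

func main() {
	// Load the local kubeconfig (assumption: default ~/.kube/config location).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(context.Background(), cs, "ingress-nginx",
		"app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
}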
	
	
	==> Docker <==
	Jun 03 12:31:03 addons-975100 dockerd[1329]: time="2024-06-03T12:31:03.095297230Z" level=warning msg="cleaning up after shim disconnected" id=7a328f3b6b043b080493e1cfddb234d2eb6df70b40027d6fcd910500c6659b8f namespace=moby
	Jun 03 12:31:03 addons-975100 dockerd[1329]: time="2024-06-03T12:31:03.095323030Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:31:03 addons-975100 dockerd[1323]: time="2024-06-03T12:31:03.094851029Z" level=info msg="ignoring event" container=7a328f3b6b043b080493e1cfddb234d2eb6df70b40027d6fcd910500c6659b8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:31:04 addons-975100 dockerd[1323]: time="2024-06-03T12:31:04.028134916Z" level=info msg="ignoring event" container=9d013a3049b009ee691e8ac3e10279669930adbbc73733b76fc2537b7fb7dbe3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:31:04 addons-975100 dockerd[1329]: time="2024-06-03T12:31:04.028433617Z" level=info msg="shim disconnected" id=9d013a3049b009ee691e8ac3e10279669930adbbc73733b76fc2537b7fb7dbe3 namespace=moby
	Jun 03 12:31:04 addons-975100 dockerd[1329]: time="2024-06-03T12:31:04.028522517Z" level=warning msg="cleaning up after shim disconnected" id=9d013a3049b009ee691e8ac3e10279669930adbbc73733b76fc2537b7fb7dbe3 namespace=moby
	Jun 03 12:31:04 addons-975100 dockerd[1329]: time="2024-06-03T12:31:04.028540217Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:31:05 addons-975100 dockerd[1329]: time="2024-06-03T12:31:05.833449605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:31:05 addons-975100 dockerd[1329]: time="2024-06-03T12:31:05.833559605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:31:05 addons-975100 dockerd[1329]: time="2024-06-03T12:31:05.833581905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:31:05 addons-975100 dockerd[1329]: time="2024-06-03T12:31:05.837613320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:31:06 addons-975100 cri-dockerd[1229]: time="2024-06-03T12:31:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4a31442d52c0ecf55b3d71637e91b06d4fdb6ed85b269873d4fa0243b7754614/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 03 12:31:07 addons-975100 cri-dockerd[1229]: time="2024-06-03T12:31:07Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Jun 03 12:31:08 addons-975100 dockerd[1329]: time="2024-06-03T12:31:08.223467068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:31:08 addons-975100 dockerd[1329]: time="2024-06-03T12:31:08.223765868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:31:08 addons-975100 dockerd[1329]: time="2024-06-03T12:31:08.223848269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:31:08 addons-975100 dockerd[1329]: time="2024-06-03T12:31:08.224074369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:31:08 addons-975100 dockerd[1323]: time="2024-06-03T12:31:08.378961201Z" level=info msg="ignoring event" container=ea9008fe8ddebfa56402d53fa1d06c826bed96d90d669c49e9a69ac72da75f30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:31:08 addons-975100 dockerd[1329]: time="2024-06-03T12:31:08.379833004Z" level=info msg="shim disconnected" id=ea9008fe8ddebfa56402d53fa1d06c826bed96d90d669c49e9a69ac72da75f30 namespace=moby
	Jun 03 12:31:08 addons-975100 dockerd[1329]: time="2024-06-03T12:31:08.380152004Z" level=warning msg="cleaning up after shim disconnected" id=ea9008fe8ddebfa56402d53fa1d06c826bed96d90d669c49e9a69ac72da75f30 namespace=moby
	Jun 03 12:31:08 addons-975100 dockerd[1329]: time="2024-06-03T12:31:08.380206105Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:31:10 addons-975100 dockerd[1323]: time="2024-06-03T12:31:10.656715729Z" level=info msg="ignoring event" container=4a31442d52c0ecf55b3d71637e91b06d4fdb6ed85b269873d4fa0243b7754614 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:31:10 addons-975100 dockerd[1329]: time="2024-06-03T12:31:10.658432533Z" level=info msg="shim disconnected" id=4a31442d52c0ecf55b3d71637e91b06d4fdb6ed85b269873d4fa0243b7754614 namespace=moby
	Jun 03 12:31:10 addons-975100 dockerd[1329]: time="2024-06-03T12:31:10.658723734Z" level=warning msg="cleaning up after shim disconnected" id=4a31442d52c0ecf55b3d71637e91b06d4fdb6ed85b269873d4fa0243b7754614 namespace=moby
	Jun 03 12:31:10 addons-975100 dockerd[1329]: time="2024-06-03T12:31:10.658744834Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	ea9008fe8ddeb       busybox@sha256:5eef5ed34e1e1ff0a4ae850395cbf665c4de6b4b83a32a0bc7bcb998e24e7bbb                                                              5 seconds ago        Exited              busybox                                  0                   4a31442d52c0e       test-local-path
	7a328f3b6b043       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:40402d51273ea7d281392557096333b5f62316a684f9bc9252214243840f757e                            11 seconds ago       Exited              gadget                                   4                   d3e295b142be4       gadget-cgfvw
	e09092feeb81b       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              12 seconds ago       Exited              helper-pod                               0                   9d013a3049b00       helper-pod-create-pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755
	14efda6317751       nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d                                                                14 seconds ago       Running             nginx                                    0                   61f4d6ef22dae       test-job-nginx-0
	6742bbe06e5e5       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                                        29 seconds ago       Running             headlamp                                 0                   7fd1d4d29328c       headlamp-68456f997b-dr562
	5218a97a3cd0c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 About a minute ago   Running             gcp-auth                                 0                   46577af2b0d7b       gcp-auth-5db96cd9b4-hh6pb
	4f1cc76c76496       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   6abdf203602fb       csi-hostpathplugin-6gcgw
	0a4075c6d6a02       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   ac0b6125efa13       ingress-nginx-controller-768f948f8f-km8sg
	827ddfb1cfc59       fd19c461b125e                                                                                                                                2 minutes ago        Running             admission                                0                   32c78d7767064       volcano-admission-7b497cf95b-jldgv
	9c9d91d617208       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          2 minutes ago        Running             csi-provisioner                          0                   6abdf203602fb       csi-hostpathplugin-6gcgw
	4242e3c375bce       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            2 minutes ago        Running             liveness-probe                           0                   6abdf203602fb       csi-hostpathplugin-6gcgw
	e6ad5a52b715b       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           2 minutes ago        Running             hostpath                                 0                   6abdf203602fb       csi-hostpathplugin-6gcgw
	398bc30d762a8       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                2 minutes ago        Running             node-driver-registrar                    0                   6abdf203602fb       csi-hostpathplugin-6gcgw
	fa4122ffb4b22       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              2 minutes ago        Running             csi-resizer                              0                   d4a86412e16b2       csi-hostpath-resizer-0
	88f1d3df3cd2f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   6abdf203602fb       csi-hostpathplugin-6gcgw
	718b5e9b66fbf       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   6893d951d0928       csi-hostpath-attacher-0
	d55885ff60a73       volcanosh/vc-scheduler@sha256:64d6efcf1a48366201aafcaf1bd4cb6d66246ec1c395ddb0deefe11350bcebba                                               2 minutes ago        Running             volcano-scheduler                        0                   f42959980e0c3       volcano-scheduler-765f888978-c4zsh
	a092e24f175e9       volcanosh/vc-controller-manager@sha256:1dd0973f67becc3336f009cce4eac8677d857aaf4ba766cfff371ad34dfc34cf                                      2 minutes ago        Running             volcano-controller                       0                   5b2f17ae4cc55       volcano-controller-86c5446455-z9tvf
	49b4e47f7b09f       684c5ea3b61b2                                                                                                                                2 minutes ago        Exited              patch                                    1                   8e749eff4c631       ingress-nginx-admission-patch-clssn
	9add1a829040f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              create                                   0                   7769aaca12eb3       ingress-nginx-admission-create-pjjnn
	c1739fa489b8e       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   6c0d49f2f7cf7       local-path-provisioner-8d985888d-ntqff
	4f4b41646486f       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   f4ba72a2e76d0       snapshot-controller-745499f584-b5wtk
	61debf417720f       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   b55ce266f279c       snapshot-controller-745499f584-brj7g
	c4d22481c8766       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        3 minutes ago        Running             yakd                                     0                   31d500e7702af       yakd-dashboard-5ddbf7d777-c4zgs
	4fce447138fa0       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        3 minutes ago        Running             metrics-server                           0                   c661f75d4e6a2       metrics-server-c59844bb4-jhc6h
	7fa73329c8f54       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  3 minutes ago        Running             tiller                                   0                   c6d94620e5ae4       tiller-deploy-6677d64bcd-zvpk6
	3c70b777353b4       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             3 minutes ago        Running             minikube-ingress-dns                     0                   16ee1bcaf3df9       kube-ingress-dns-minikube
	8edd2d61a6e52       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4                               4 minutes ago        Running             cloud-spanner-emulator                   0                   68af7136c7295       cloud-spanner-emulator-6fcd4f6f98-w9sb8
	aacbc196bde93       6e38f40d628db                                                                                                                                4 minutes ago        Running             storage-provisioner                      0                   a5a43de06a90c       storage-provisioner
	cb8eec454d123       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   95311a03fad3c       coredns-7db6d8ff4d-x92m2
	397b9be37ef1d       747097150317f                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   840bf69b31498       kube-proxy-whw2f
	8937eb0f2a3a7       91be940803172                                                                                                                                5 minutes ago        Running             kube-apiserver                           0                   4cb67286119c5       kube-apiserver-addons-975100
	a7940ba745bf1       25a1387cdab82                                                                                                                                5 minutes ago        Running             kube-controller-manager                  0                   662f13833c665       kube-controller-manager-addons-975100
	e25f1ac791147       a52dc94f0a912                                                                                                                                5 minutes ago        Running             kube-scheduler                           0                   f54b357b17735       kube-scheduler-addons-975100
	1263ebd768922       3861cfcd7c04c                                                                                                                                5 minutes ago        Running             etcd                                     0                   a72d5423aa388       etcd-addons-975100
	
	
	==> controller_ingress [0a4075c6d6a0] <==
	W0603 12:29:21.549564       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0603 12:29:21.549892       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0603 12:29:21.558292       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.1" state="clean" commit="6911225c3f747e1cd9d109c305436d08b668f086" platform="linux/amd64"
	I0603 12:29:22.306709       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0603 12:29:22.369715       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0603 12:29:22.402704       7 nginx.go:264] "Starting NGINX Ingress controller"
	I0603 12:29:22.451636       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"0746c131-acd3-4921-848e-196d3fe6d50d", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0603 12:29:22.456342       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"516cadde-bac8-4937-a924-e7c3a883b75f", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0603 12:29:22.456407       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"f7789299-ed5a-49fb-b1fa-2964c3b2f427", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0603 12:29:23.605569       7 nginx.go:307] "Starting NGINX process"
	I0603 12:29:23.606352       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0603 12:29:23.608335       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0603 12:29:23.607660       7 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0603 12:29:23.632323       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0603 12:29:23.632544       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-km8sg"
	I0603 12:29:23.642219       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-km8sg" node="addons-975100"
	I0603 12:29:23.678838       7 controller.go:210] "Backend successfully reloaded"
	I0603 12:29:23.678982       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0603 12:29:23.679156       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-km8sg", UID:"8fa34934-9d7f-4065-a3ac-c49eb77d624a", APIVersion:"v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [cb8eec454d12] <==
	[INFO] 10.244.0.7:45481 - 40073 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000197s
	[INFO] 10.244.0.7:60042 - 37417 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077899s
	[INFO] 10.244.0.7:60042 - 14379 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070899s
	[INFO] 10.244.0.7:42410 - 55640 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001772s
	[INFO] 10.244.0.7:42410 - 55130 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001156s
	[INFO] 10.244.0.7:47086 - 34412 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001139s
	[INFO] 10.244.0.7:47086 - 13679 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000784s
	[INFO] 10.244.0.7:60412 - 8612 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000789s
	[INFO] 10.244.0.7:60412 - 24761 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000501s
	[INFO] 10.244.0.7:34714 - 17563 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000471s
	[INFO] 10.244.0.7:34714 - 16025 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000631s
	[INFO] 10.244.0.7:50347 - 51535 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000468s
	[INFO] 10.244.0.7:50347 - 32593 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043299s
	[INFO] 10.244.0.7:55121 - 9148 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0001226s
	[INFO] 10.244.0.7:55121 - 13758 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0000508s
	[INFO] 10.244.0.26:41281 - 48971 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000284401s
	[INFO] 10.244.0.26:42750 - 45526 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0000808s
	[INFO] 10.244.0.26:43294 - 36940 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000297401s
	[INFO] 10.244.0.26:48769 - 40048 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000453001s
	[INFO] 10.244.0.26:47232 - 21324 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001343s
	[INFO] 10.244.0.26:36966 - 62356 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000528501s
	[INFO] 10.244.0.26:50704 - 40251 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002017403s
	[INFO] 10.244.0.26:56436 - 24801 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.002856705s
	[INFO] 10.244.0.27:47386 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000536601s
	[INFO] 10.244.0.27:41342 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001394s
	
	
	==> describe nodes <==
	Name:               addons-975100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-975100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=addons-975100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_25_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-975100
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-975100"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:25:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-975100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:31:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:31:03 +0000   Mon, 03 Jun 2024 12:25:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:31:03 +0000   Mon, 03 Jun 2024 12:25:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:31:03 +0000   Mon, 03 Jun 2024 12:25:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:31:03 +0000   Mon, 03 Jun 2024 12:26:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.22.146.54
	  Hostname:    addons-975100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 61f1327c870147248013b62f5a35dbdc
	  System UUID:                b6c1e127-b3a1-6a42-995f-9013386c6cce
	  Boot ID:                    499d5743-b2eb-47a1-b4e2-8c44bea2bcd4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace           Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------           ----                                         ------------  ----------  ---------------  -------------  ---
	  default             cloud-spanner-emulator-6fcd4f6f98-w9sb8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  gadget              gadget-cgfvw                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  gcp-auth            gcp-auth-5db96cd9b4-hh6pb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  headlamp            headlamp-68456f997b-dr562                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  ingress-nginx       ingress-nginx-controller-768f948f8f-km8sg    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m27s
	  kube-system         coredns-7db6d8ff4d-x92m2                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m3s
	  kube-system         csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system         csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system         csi-hostpathplugin-6gcgw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system         etcd-addons-975100                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m17s
	  kube-system         kube-apiserver-addons-975100                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system         kube-controller-manager-addons-975100        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system         kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system         kube-proxy-whw2f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system         kube-scheduler-addons-975100                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system         metrics-server-c59844bb4-jhc6h               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m33s
	  kube-system         snapshot-controller-745499f584-b5wtk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system         snapshot-controller-745499f584-brj7g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system         storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system         tiller-deploy-6677d64bcd-zvpk6               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  local-path-storage  local-path-provisioner-8d985888d-ntqff       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  my-volcano          test-job-nginx-0                             1 (50%)       1 (50%)     0 (0%)           0 (0%)         41s
	  volcano-system      volcano-admission-7b497cf95b-jldgv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  volcano-system      volcano-controller-86c5446455-z9tvf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  volcano-system      volcano-scheduler-765f888978-c4zsh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  yakd-dashboard      yakd-dashboard-5ddbf7d777-c4zgs              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1950m (97%)  1 (50%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m49s  kube-proxy       
	  Normal  Starting                 5m17s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m17s  kubelet          Node addons-975100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m17s  kubelet          Node addons-975100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m17s  kubelet          Node addons-975100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m12s  kubelet          Node addons-975100 status is now: NodeReady
	  Normal  RegisteredNode           5m4s   node-controller  Node addons-975100 event: Registered Node addons-975100 in Controller
	
	
	==> dmesg <==
	[ +11.356128] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.350473] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.099393] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.028611] kauditd_printk_skb: 91 callbacks suppressed
	[Jun 3 12:27] kauditd_printk_skb: 72 callbacks suppressed
	[ +29.001992] kauditd_printk_skb: 4 callbacks suppressed
	[ +25.646670] hrtimer: interrupt took 3629187 ns
	[  +3.827237] kauditd_printk_skb: 4 callbacks suppressed
	[Jun 3 12:28] kauditd_printk_skb: 34 callbacks suppressed
	[ +11.545816] kauditd_printk_skb: 22 callbacks suppressed
	[  +8.136451] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.363405] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.008450] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.363495] kauditd_printk_skb: 7 callbacks suppressed
	[Jun 3 12:29] kauditd_printk_skb: 22 callbacks suppressed
	[ +16.069171] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.020313] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.900693] kauditd_printk_skb: 13 callbacks suppressed
	[Jun 3 12:30] kauditd_printk_skb: 80 callbacks suppressed
	[ +17.241013] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.410062] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.841213] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.033425] kauditd_printk_skb: 9 callbacks suppressed
	[ +14.621067] kauditd_printk_skb: 13 callbacks suppressed
	[Jun 3 12:31] kauditd_printk_skb: 42 callbacks suppressed
	
	
	==> etcd [1263ebd76892] <==
	{"level":"warn","ts":"2024-06-03T12:30:51.120013Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.49825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" ","response":"range_response_count:1 size:3625"}
	{"level":"info","ts":"2024-06-03T12:30:51.120041Z","caller":"traceutil/trace.go:171","msg":"trace[408102023] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1720; }","duration":"103.55405ms","start":"2024-06-03T12:30:51.01648Z","end":"2024-06-03T12:30:51.120034Z","steps":["trace[408102023] 'agreement among raft nodes before linearized reading'  (duration: 103.34465ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:30:51.120264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.114076ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T12:30:51.120288Z","caller":"traceutil/trace.go:171","msg":"trace[1449767015] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1720; }","duration":"156.163276ms","start":"2024-06-03T12:30:50.964118Z","end":"2024-06-03T12:30:51.120281Z","steps":["trace[1449767015] 'agreement among raft nodes before linearized reading'  (duration: 156.125776ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:30:51.120547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"354.305373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755\" ","response":"range_response_count:1 size:4204"}
	{"level":"info","ts":"2024-06-03T12:30:51.120598Z","caller":"traceutil/trace.go:171","msg":"trace[1437445043] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755; range_end:; response_count:1; response_revision:1720; }","duration":"354.378573ms","start":"2024-06-03T12:30:50.76621Z","end":"2024-06-03T12:30:51.120589Z","steps":["trace[1437445043] 'agreement among raft nodes before linearized reading'  (duration: 354.258873ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:30:51.120622Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:30:50.766198Z","time spent":"354.417673ms","remote":"127.0.0.1:44456","response type":"/etcdserverpb.KV/Range","request count":0,"request size":94,"response count":1,"response size":4226,"request content":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755\" "}
	{"level":"info","ts":"2024-06-03T12:30:51.514435Z","caller":"traceutil/trace.go:171","msg":"trace[2130068548] transaction","detail":"{read_only:false; response_revision:1721; number_of_response:1; }","duration":"380.159084ms","start":"2024-06-03T12:30:51.134217Z","end":"2024-06-03T12:30:51.514376Z","steps":["trace[2130068548] 'process raft request'  (duration: 362.893776ms)","trace[2130068548] 'compare'  (duration: 16.463108ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T12:30:51.514597Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:30:51.134198Z","time spent":"380.320684ms","remote":"127.0.0.1:44518","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1698 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-06-03T12:30:51.556309Z","caller":"traceutil/trace.go:171","msg":"trace[1623791038] linearizableReadLoop","detail":"{readStateIndex:1804; appliedIndex:1802; }","duration":"408.696697ms","start":"2024-06-03T12:30:51.147455Z","end":"2024-06-03T12:30:51.556151Z","steps":["trace[1623791038] 'read index received'  (duration: 349.665369ms)","trace[1623791038] 'applied index is now lower than readState.Index'  (duration: 59.030028ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T12:30:51.556429Z","caller":"traceutil/trace.go:171","msg":"trace[1495134860] transaction","detail":"{read_only:false; response_revision:1722; number_of_response:1; }","duration":"415.717401ms","start":"2024-06-03T12:30:51.140698Z","end":"2024-06-03T12:30:51.556415Z","steps":["trace[1495134860] 'process raft request'  (duration: 415.181001ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:30:51.55653Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:30:51.140668Z","time spent":"415.789301ms","remote":"127.0.0.1:44436","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1704 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-06-03T12:30:51.556542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.051682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2024-06-03T12:30:51.55658Z","caller":"traceutil/trace.go:171","msg":"trace[133692223] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1722; }","duration":"170.119382ms","start":"2024-06-03T12:30:51.386433Z","end":"2024-06-03T12:30:51.55657Z","steps":["trace[133692223] 'agreement among raft nodes before linearized reading'  (duration: 170.003582ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:30:51.556758Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"409.317397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5807"}
	{"level":"info","ts":"2024-06-03T12:30:51.556781Z","caller":"traceutil/trace.go:171","msg":"trace[1875638058] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1722; }","duration":"409.342797ms","start":"2024-06-03T12:30:51.147432Z","end":"2024-06-03T12:30:51.556774Z","steps":["trace[1875638058] 'agreement among raft nodes before linearized reading'  (duration: 409.268997ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:30:51.556802Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:30:51.147305Z","time spent":"409.489597ms","remote":"127.0.0.1:44456","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":5829,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-06-03T12:30:51.856637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.205073ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14678233794446630184 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/my-volcano/test-job-3a693b79-aae4-4649-b4f9-6c713af56aae.17d57e6b8d4f950b\" mod_revision:1719 > success:<request_put:<key:\"/registry/events/my-volcano/test-job-3a693b79-aae4-4649-b4f9-6c713af56aae.17d57e6b8d4f950b\" value_size:598 lease:5454861757591853562 >> failure:<request_range:<key:\"/registry/events/my-volcano/test-job-3a693b79-aae4-4649-b4f9-6c713af56aae.17d57e6b8d4f950b\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-03T12:30:51.856832Z","caller":"traceutil/trace.go:171","msg":"trace[188974757] transaction","detail":"{read_only:false; response_revision:1723; number_of_response:1; }","duration":"288.285739ms","start":"2024-06-03T12:30:51.568416Z","end":"2024-06-03T12:30:51.856702Z","steps":["trace[188974757] 'process raft request'  (duration: 135.948166ms)","trace[188974757] 'compare'  (duration: 152.093473ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T12:30:58.71486Z","caller":"traceutil/trace.go:171","msg":"trace[1535796965] transaction","detail":"{read_only:false; response_revision:1747; number_of_response:1; }","duration":"113.089477ms","start":"2024-06-03T12:30:58.60175Z","end":"2024-06-03T12:30:58.714839Z","steps":["trace[1535796965] 'process raft request'  (duration: 112.980677ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T12:30:59.302347Z","caller":"traceutil/trace.go:171","msg":"trace[2050133570] linearizableReadLoop","detail":"{readStateIndex:1833; appliedIndex:1832; }","duration":"285.517846ms","start":"2024-06-03T12:30:59.01681Z","end":"2024-06-03T12:30:59.302328Z","steps":["trace[2050133570] 'read index received'  (duration: 239.023873ms)","trace[2050133570] 'applied index is now lower than readState.Index'  (duration: 46.493373ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T12:30:59.302998Z","caller":"traceutil/trace.go:171","msg":"trace[90033170] transaction","detail":"{read_only:false; response_revision:1748; number_of_response:1; }","duration":"406.132535ms","start":"2024-06-03T12:30:58.89685Z","end":"2024-06-03T12:30:59.302982Z","steps":["trace[90033170] 'process raft request'  (duration: 359.031661ms)","trace[90033170] 'compare'  (duration: 46.371773ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T12:30:59.303886Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:30:58.896827Z","time spent":"407.010436ms","remote":"127.0.0.1:44518","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-3kksarpqwv72dd4dch2dxd5wiq\" mod_revision:1706 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-3kksarpqwv72dd4dch2dxd5wiq\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-3kksarpqwv72dd4dch2dxd5wiq\" > >"}
	{"level":"warn","ts":"2024-06-03T12:30:59.303508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.705548ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" ","response":"range_response_count:1 size:3625"}
	{"level":"info","ts":"2024-06-03T12:30:59.304462Z","caller":"traceutil/trace.go:171","msg":"trace[709092329] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1748; }","duration":"287.697549ms","start":"2024-06-03T12:30:59.016754Z","end":"2024-06-03T12:30:59.304451Z","steps":["trace[709092329] 'agreement among raft nodes before linearized reading'  (duration: 286.583148ms)"],"step_count":1}
	
	
	==> gcp-auth [5218a97a3cd0] <==
	2024/06/03 12:30:13 GCP Auth Webhook started!
	2024/06/03 12:30:26 Ready to marshal response ...
	2024/06/03 12:30:26 Ready to write response ...
	2024/06/03 12:30:31 Ready to marshal response ...
	2024/06/03 12:30:31 Ready to write response ...
	2024/06/03 12:30:31 Ready to marshal response ...
	2024/06/03 12:30:31 Ready to write response ...
	2024/06/03 12:30:31 Ready to marshal response ...
	2024/06/03 12:30:31 Ready to write response ...
	2024/06/03 12:30:31 Ready to marshal response ...
	2024/06/03 12:30:31 Ready to write response ...
	2024/06/03 12:30:32 Ready to marshal response ...
	2024/06/03 12:30:32 Ready to write response ...
	2024/06/03 12:30:38 Ready to marshal response ...
	2024/06/03 12:30:38 Ready to write response ...
	2024/06/03 12:30:38 Ready to marshal response ...
	2024/06/03 12:30:38 Ready to write response ...
	
	
	==> kernel <==
	 12:31:13 up 7 min,  0 users,  load average: 2.74, 2.22, 1.10
	Linux addons-975100 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8937eb0f2a3a] <==
	W0603 12:29:08.226861       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.116.195:443: connect: connection refused
	W0603 12:29:09.259512       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.116.195:443: connect: connection refused
	W0603 12:29:10.293850       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.116.195:443: connect: connection refused
	W0603 12:29:11.381016       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.116.195:443: connect: connection refused
	W0603 12:29:12.406070       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.116.195:443: connect: connection refused
	W0603 12:29:13.496982       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.116.195:443: connect: connection refused
	W0603 12:29:37.193860       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.95.138:443: connect: connection refused
	E0603 12:29:37.193981       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.95.138:443: connect: connection refused
	W0603 12:29:56.302986       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.95.138:443: connect: connection refused
	E0603 12:29:56.303020       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.95.138:443: connect: connection refused
	W0603 12:29:56.374514       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.95.138:443: connect: connection refused
	E0603 12:29:56.374629       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.95.138:443: connect: connection refused
	E0603 12:30:31.121203       1 watch.go:250] http2: stream closed
	I0603 12:30:31.344334       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.114.242"}
	I0603 12:30:31.780676       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0603 12:30:31.848968       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0603 12:30:38.060221       1 trace.go:236] Trace[1898624533]: "Patch" accept:application/json, */*,audit-id:563adb48-b9f1-4bf9-aeb3-470b3a22280c,client:10.244.0.18,api-group:,api-version:v1,name:test-job-3a693b79-aae4-4649-b4f9-6c713af56aae.17d57e6b8d4f950b,subresource:,namespace:my-volcano,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/my-volcano/events/test-job-3a693b79-aae4-4649-b4f9-6c713af56aae.17d57e6b8d4f950b,user-agent:vc-scheduler/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PATCH (03-Jun-2024 12:30:37.405) (total time: 654ms):
	Trace[1898624533]: ["GuaranteedUpdate etcd3" audit-id:563adb48-b9f1-4bf9-aeb3-470b3a22280c,key:/events/my-volcano/test-job-3a693b79-aae4-4649-b4f9-6c713af56aae.17d57e6b8d4f950b,type:*core.Event,resource:events 654ms (12:30:37.405)
	Trace[1898624533]:  ---"initial value restored" 305ms (12:30:37.711)
	Trace[1898624533]:  ---"Txn call completed" 345ms (12:30:38.059)]
	Trace[1898624533]: ---"Object stored in database" 347ms (12:30:38.060)
	Trace[1898624533]: [654.431544ms] [654.431544ms] END
	I0603 12:30:51.123542       1 trace.go:236] Trace[1169272564]: "Get" accept:application/json, */*,audit-id:f777dcac-4f8b-46c5-a07b-eeb9d3bc710d,client:172.22.146.54,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (03-Jun-2024 12:30:50.547) (total time: 575ms):
	Trace[1169272564]: ---"About to write a response" 575ms (12:30:51.123)
	Trace[1169272564]: [575.644181ms] [575.644181ms] END
	
	
	==> kube-controller-manager [a7940ba745bf] <==
	I0603 12:30:00.934787       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0603 12:30:00.956611       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0603 12:30:00.981302       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0603 12:30:01.038245       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0603 12:30:01.776879       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0603 12:30:01.791395       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0603 12:30:01.812648       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0603 12:30:01.822631       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0603 12:30:14.189290       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="20.714033ms"
	I0603 12:30:14.192413       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="3.071505ms"
	I0603 12:30:30.034352       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0603 12:30:30.119527       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0603 12:30:30.988603       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init"
	I0603 12:30:31.086720       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0603 12:30:31.281932       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0603 12:30:31.560865       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="121.554316ms"
	E0603 12:30:31.560955       1 replica_set.go:557] sync "headlamp/headlamp-68456f997b" failed with pods "headlamp-68456f997b-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0603 12:30:31.720441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="158.18415ms"
	I0603 12:30:31.741955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="21.41022ms"
	I0603 12:30:31.742109       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="79.4µs"
	I0603 12:30:31.776677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="30.4µs"
	I0603 12:30:45.567953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="64.7µs"
	I0603 12:30:45.665937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="43.096422ms"
	I0603 12:30:45.666025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="37.5µs"
	I0603 12:30:50.071517       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="8.9µs"
	
	
	==> kube-proxy [397b9be37ef1] <==
	I0603 12:26:21.958607       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:26:22.905120       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.146.54"]
	I0603 12:26:23.353496       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:26:23.353572       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:26:23.353602       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:26:23.381218       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:26:23.409503       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:26:23.409601       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:26:23.412838       1 config.go:192] "Starting service config controller"
	I0603 12:26:23.412860       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:26:23.413028       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:26:23.413040       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:26:23.414823       1 config.go:319] "Starting node config controller"
	I0603 12:26:23.414844       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:26:23.581923       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:26:23.582145       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:26:23.517370       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e25f1ac79114] <==
	W0603 12:25:54.105527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:54.105626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 12:25:54.174828       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:25:54.175073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 12:25:54.187716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 12:25:54.187939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 12:25:54.335279       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:25:54.335360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:25:54.406010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:25:54.406049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 12:25:54.429807       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:25:54.430090       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 12:25:54.440456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 12:25:54.440484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 12:25:54.447868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:54.448152       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:25:54.557238       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:25:54.557446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 12:25:54.602096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:25:54.602531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:25:54.609541       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:54.609864       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 12:25:54.621701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:54.621770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 12:25:56.894112       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 12:31:05 addons-975100 kubelet[2110]: E0603 12:31:05.275597    2110 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="939d90c1-71dd-485f-93e3-f555b567a998" containerName="helper-pod"
	Jun 03 12:31:05 addons-975100 kubelet[2110]: E0603 12:31:05.275734    2110 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77d080cc-0158-445b-ac0f-a5c067638727" containerName="registry-proxy"
	Jun 03 12:31:05 addons-975100 kubelet[2110]: E0603 12:31:05.275862    2110 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="04ed4d5a-632f-444a-b01c-23b8e51aaa10" containerName="registry"
	Jun 03 12:31:05 addons-975100 kubelet[2110]: E0603 12:31:05.276011    2110 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8712d628-4348-427e-9373-ce7d8f1b2e9b" containerName="nvidia-device-plugin-ctr"
	Jun 03 12:31:05 addons-975100 kubelet[2110]: I0603 12:31:05.276469    2110 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ed4d5a-632f-444a-b01c-23b8e51aaa10" containerName="registry"
	Jun 03 12:31:05 addons-975100 kubelet[2110]: I0603 12:31:05.276636    2110 memory_manager.go:354] "RemoveStaleState removing state" podUID="8712d628-4348-427e-9373-ce7d8f1b2e9b" containerName="nvidia-device-plugin-ctr"
	Jun 03 12:31:05 addons-975100 kubelet[2110]: I0603 12:31:05.276817    2110 memory_manager.go:354] "RemoveStaleState removing state" podUID="77d080cc-0158-445b-ac0f-a5c067638727" containerName="registry-proxy"
	Jun 03 12:31:05 addons-975100 kubelet[2110]: I0603 12:31:05.276941    2110 memory_manager.go:354] "RemoveStaleState removing state" podUID="939d90c1-71dd-485f-93e3-f555b567a998" containerName="helper-pod"
	Jun 03 12:31:05 addons-975100 kubelet[2110]: I0603 12:31:05.366583    2110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b90d022f-c2be-463b-aa0d-efdd95accf03-gcp-creds\") pod \"test-local-path\" (UID: \"b90d022f-c2be-463b-aa0d-efdd95accf03\") " pod="default/test-local-path"
	Jun 03 12:31:05 addons-975100 kubelet[2110]: I0603 12:31:05.367140    2110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2bl9\" (UniqueName: \"kubernetes.io/projected/b90d022f-c2be-463b-aa0d-efdd95accf03-kube-api-access-r2bl9\") pod \"test-local-path\" (UID: \"b90d022f-c2be-463b-aa0d-efdd95accf03\") " pod="default/test-local-path"
	Jun 03 12:31:05 addons-975100 kubelet[2110]: I0603 12:31:05.367455    2110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755\" (UniqueName: \"kubernetes.io/host-path/b90d022f-c2be-463b-aa0d-efdd95accf03-pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755\") pod \"test-local-path\" (UID: \"b90d022f-c2be-463b-aa0d-efdd95accf03\") " pod="default/test-local-path"
	Jun 03 12:31:06 addons-975100 kubelet[2110]: I0603 12:31:06.127329    2110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a31442d52c0ecf55b3d71637e91b06d4fdb6ed85b269873d4fa0243b7754614"
	Jun 03 12:31:06 addons-975100 kubelet[2110]: I0603 12:31:06.127959    2110 scope.go:117] "RemoveContainer" containerID="7a328f3b6b043b080493e1cfddb234d2eb6df70b40027d6fcd910500c6659b8f"
	Jun 03 12:31:06 addons-975100 kubelet[2110]: E0603 12:31:06.128647    2110 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-cgfvw_gadget(17eb3a37-438b-4597-a59d-f7ec22dc0347)\"" pod="gadget/gadget-cgfvw" podUID="17eb3a37-438b-4597-a59d-f7ec22dc0347"
	Jun 03 12:31:06 addons-975100 kubelet[2110]: I0603 12:31:06.327808    2110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="939d90c1-71dd-485f-93e3-f555b567a998" path="/var/lib/kubelet/pods/939d90c1-71dd-485f-93e3-f555b567a998/volumes"
	Jun 03 12:31:10 addons-975100 kubelet[2110]: I0603 12:31:10.822573    2110 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b90d022f-c2be-463b-aa0d-efdd95accf03-gcp-creds\") pod \"b90d022f-c2be-463b-aa0d-efdd95accf03\" (UID: \"b90d022f-c2be-463b-aa0d-efdd95accf03\") "
	Jun 03 12:31:10 addons-975100 kubelet[2110]: I0603 12:31:10.822642    2110 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b90d022f-c2be-463b-aa0d-efdd95accf03-pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755\") pod \"b90d022f-c2be-463b-aa0d-efdd95accf03\" (UID: \"b90d022f-c2be-463b-aa0d-efdd95accf03\") "
	Jun 03 12:31:10 addons-975100 kubelet[2110]: I0603 12:31:10.822698    2110 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2bl9\" (UniqueName: \"kubernetes.io/projected/b90d022f-c2be-463b-aa0d-efdd95accf03-kube-api-access-r2bl9\") pod \"b90d022f-c2be-463b-aa0d-efdd95accf03\" (UID: \"b90d022f-c2be-463b-aa0d-efdd95accf03\") "
	Jun 03 12:31:10 addons-975100 kubelet[2110]: I0603 12:31:10.823320    2110 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b90d022f-c2be-463b-aa0d-efdd95accf03-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b90d022f-c2be-463b-aa0d-efdd95accf03" (UID: "b90d022f-c2be-463b-aa0d-efdd95accf03"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 03 12:31:10 addons-975100 kubelet[2110]: I0603 12:31:10.823433    2110 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b90d022f-c2be-463b-aa0d-efdd95accf03-pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755" (OuterVolumeSpecName: "data") pod "b90d022f-c2be-463b-aa0d-efdd95accf03" (UID: "b90d022f-c2be-463b-aa0d-efdd95accf03"). InnerVolumeSpecName "pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 03 12:31:10 addons-975100 kubelet[2110]: I0603 12:31:10.825579    2110 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b90d022f-c2be-463b-aa0d-efdd95accf03-kube-api-access-r2bl9" (OuterVolumeSpecName: "kube-api-access-r2bl9") pod "b90d022f-c2be-463b-aa0d-efdd95accf03" (UID: "b90d022f-c2be-463b-aa0d-efdd95accf03"). InnerVolumeSpecName "kube-api-access-r2bl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 03 12:31:10 addons-975100 kubelet[2110]: I0603 12:31:10.923760    2110 reconciler_common.go:289] "Volume detached for volume \"pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755\" (UniqueName: \"kubernetes.io/host-path/b90d022f-c2be-463b-aa0d-efdd95accf03-pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755\") on node \"addons-975100\" DevicePath \"\""
	Jun 03 12:31:10 addons-975100 kubelet[2110]: I0603 12:31:10.923852    2110 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-r2bl9\" (UniqueName: \"kubernetes.io/projected/b90d022f-c2be-463b-aa0d-efdd95accf03-kube-api-access-r2bl9\") on node \"addons-975100\" DevicePath \"\""
	Jun 03 12:31:10 addons-975100 kubelet[2110]: I0603 12:31:10.923886    2110 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b90d022f-c2be-463b-aa0d-efdd95accf03-gcp-creds\") on node \"addons-975100\" DevicePath \"\""
	Jun 03 12:31:11 addons-975100 kubelet[2110]: I0603 12:31:11.525607    2110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a31442d52c0ecf55b3d71637e91b06d4fdb6ed85b269873d4fa0243b7754614"
	
	
	==> storage-provisioner [aacbc196bde9] <==
	I0603 12:26:46.777923       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 12:26:46.898113       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 12:26:46.898155       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 12:26:46.919994       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 12:26:46.925701       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-975100_b1ec731a-4fa9-470e-80f4-e614a4a8539d!
	I0603 12:26:46.928154       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"86254bd4-d43f-42bd-8add-7101ac3e6f98", APIVersion:"v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-975100_b1ec731a-4fa9-470e-80f4-e614a4a8539d became leader
	I0603 12:26:47.126049       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-975100_b1ec731a-4fa9-470e-80f4-e614a4a8539d!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 12:31:03.810189    9460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-975100 -n addons-975100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-975100 -n addons-975100: (13.4504373s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-975100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-pjjnn ingress-nginx-admission-patch-clssn helm-test
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-975100 describe pod ingress-nginx-admission-create-pjjnn ingress-nginx-admission-patch-clssn helm-test
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-975100 describe pod ingress-nginx-admission-create-pjjnn ingress-nginx-admission-patch-clssn helm-test: exit status 1 (176.9442ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pjjnn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-clssn" not found
	Error from server (NotFound): pods "helm-test" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-975100 describe pod ingress-nginx-admission-create-pjjnn ingress-nginx-admission-patch-clssn helm-test: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.91s)

                                                
                                    
x
+
TestErrorSpam/setup (197.61s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-397300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 --driver=hyperv
E0603 12:35:14.720242   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:14.735257   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:14.751301   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:14.783300   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:14.830528   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:14.925224   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:15.100282   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:15.434179   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:16.083934   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:17.371701   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:19.932580   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:25.054211   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:35.307844   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:35:55.804195   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:36:36.768078   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:37:58.701307   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-397300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 --driver=hyperv: (3m17.6061161s)
error_spam_test.go:96: unexpected stderr: "W0603 12:35:11.844494   10836 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-397300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
- KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
- MINIKUBE_LOCATION=19011
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-397300" primary control-plane node in "nospam-397300" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-397300" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0603 12:35:11.844494   10836 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (197.61s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (33.81s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300: (12.1470023s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs -n 25: (8.5934718s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-397300 --log_dir                                     | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:39 UTC | 03 Jun 24 12:39 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-397300 --log_dir                                     | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:39 UTC | 03 Jun 24 12:39 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-397300 --log_dir                                     | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:39 UTC | 03 Jun 24 12:40 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-397300 --log_dir                                     | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:40 UTC | 03 Jun 24 12:40 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-397300 --log_dir                                     | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:40 UTC | 03 Jun 24 12:40 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-397300 --log_dir                                     | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:40 UTC | 03 Jun 24 12:40 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-397300 --log_dir                                     | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:40 UTC | 03 Jun 24 12:41 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-397300                                            | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:41 UTC | 03 Jun 24 12:41 UTC |
	| start   | -p functional-808300                                        | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:41 UTC | 03 Jun 24 12:44 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-808300                                        | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:44 UTC | 03 Jun 24 12:47 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                 | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                 | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                 | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                 | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | minikube-local-cache-test:functional-808300                 |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache delete                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | minikube-local-cache-test:functional-808300                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	| ssh     | functional-808300 ssh sudo                                  | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-808300                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache reload                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	| ssh     | functional-808300 ssh                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-808300 kubectl --                                | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | --context functional-808300                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:44:52
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:44:52.346908    6624 out.go:291] Setting OutFile to fd 704 ...
	I0603 12:44:52.347636    6624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:44:52.347636    6624 out.go:304] Setting ErrFile to fd 768...
	I0603 12:44:52.347636    6624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:44:52.371214    6624 out.go:298] Setting JSON to false
	I0603 12:44:52.374417    6624 start.go:129] hostinfo: {"hostname":"minikube3","uptime":19620,"bootTime":1717399071,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 12:44:52.374570    6624 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 12:44:52.379503    6624 out.go:177] * [functional-808300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 12:44:52.382511    6624 notify.go:220] Checking for updates...
	I0603 12:44:52.384112    6624 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:44:52.387526    6624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:44:52.389822    6624 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 12:44:52.393011    6624 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:44:52.395623    6624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:44:52.399273    6624 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:44:52.399660    6624 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:44:57.628600    6624 out.go:177] * Using the hyperv driver based on existing profile
	I0603 12:44:57.632079    6624 start.go:297] selected driver: hyperv
	I0603 12:44:57.632079    6624 start.go:901] validating driver "hyperv" against &{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:44:57.632079    6624 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:44:57.681493    6624 cni.go:84] Creating CNI manager for ""
	I0603 12:44:57.681691    6624 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:44:57.681994    6624 start.go:340] cluster config:
	{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:44:57.682398    6624 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:44:57.686513    6624 out.go:177] * Starting "functional-808300" primary control-plane node in "functional-808300" cluster
	I0603 12:44:57.688649    6624 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 12:44:57.688649    6624 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 12:44:57.688649    6624 cache.go:56] Caching tarball of preloaded images
	I0603 12:44:57.689401    6624 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 12:44:57.689401    6624 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 12:44:57.689401    6624 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\config.json ...
	I0603 12:44:57.691336    6624 start.go:360] acquireMachinesLock for functional-808300: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:44:57.692355    6624 start.go:364] duration metric: took 1.0189ms to acquireMachinesLock for "functional-808300"
	I0603 12:44:57.692355    6624 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:44:57.692355    6624 fix.go:54] fixHost starting: 
	I0603 12:44:57.692355    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:00.463720    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:00.463720    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:00.463720    6624 fix.go:112] recreateIfNeeded on functional-808300: state=Running err=<nil>
	W0603 12:45:00.463843    6624 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:45:00.466527    6624 out.go:177] * Updating the running hyperv "functional-808300" VM ...
	I0603 12:45:00.470486    6624 machine.go:94] provisionDockerMachine start ...
	I0603 12:45:00.470486    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:02.633528    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:02.634526    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:02.634526    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:45:05.217131    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:45:05.217724    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:05.223157    6624 main.go:141] libmachine: Using SSH client type: native
	I0603 12:45:05.223790    6624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:45:05.223790    6624 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:45:05.358344    6624 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:45:05.358344    6624 buildroot.go:166] provisioning hostname "functional-808300"
	I0603 12:45:05.358344    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:07.532175    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:07.532175    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:07.532175    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:45:10.122258    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:45:10.122258    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:10.128846    6624 main.go:141] libmachine: Using SSH client type: native
	I0603 12:45:10.129068    6624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:45:10.129068    6624 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-808300 && echo "functional-808300" | sudo tee /etc/hostname
	I0603 12:45:10.286861    6624 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:45:10.286861    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:12.455607    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:12.455607    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:12.456689    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:45:14.988557    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:45:14.989455    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:14.995381    6624 main.go:141] libmachine: Using SSH client type: native
	I0603 12:45:14.995381    6624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:45:14.995381    6624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-808300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-808300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-808300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:45:15.130762    6624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:45:15.130838    6624 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 12:45:15.130907    6624 buildroot.go:174] setting up certificates
	I0603 12:45:15.130976    6624 provision.go:84] configureAuth start
	I0603 12:45:15.131037    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:17.272324    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:17.273161    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:17.273161    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:45:19.945482    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:45:19.946448    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:19.946539    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:22.131399    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:22.132305    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:22.132305    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:45:24.734547    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:45:24.735538    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:24.735538    6624 provision.go:143] copyHostCerts
	I0603 12:45:24.735880    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 12:45:24.736381    6624 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 12:45:24.736505    6624 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 12:45:24.737070    6624 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 12:45:24.738563    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 12:45:24.739009    6624 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 12:45:24.739009    6624 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 12:45:24.739509    6624 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 12:45:24.740975    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 12:45:24.741343    6624 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 12:45:24.741464    6624 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 12:45:24.742034    6624 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 12:45:24.743132    6624 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-808300 san=[127.0.0.1 172.22.146.164 functional-808300 localhost minikube]
	I0603 12:45:24.955915    6624 provision.go:177] copyRemoteCerts
	I0603 12:45:24.970342    6624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:45:24.970342    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:27.129511    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:27.129511    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:27.130140    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:45:29.713755    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:45:29.713843    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:29.713899    6624 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:45:29.828382    6624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8579995s)
	I0603 12:45:29.828382    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 12:45:29.828382    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:45:29.870100    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 12:45:29.870100    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:45:29.917952    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 12:45:29.918946    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:45:29.959919    6624 provision.go:87] duration metric: took 14.8288197s to configureAuth
	I0603 12:45:29.959919    6624 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:45:29.960924    6624 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:45:29.960924    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:32.127696    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:32.128683    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:32.128683    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:45:34.694088    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:45:34.694088    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:34.700809    6624 main.go:141] libmachine: Using SSH client type: native
	I0603 12:45:34.701521    6624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:45:34.701521    6624 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 12:45:34.846983    6624 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 12:45:34.846983    6624 buildroot.go:70] root file system type: tmpfs
	I0603 12:45:34.847269    6624 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 12:45:34.847378    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:37.021853    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:37.022803    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:37.022803    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:45:39.586157    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:45:39.586368    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:39.591502    6624 main.go:141] libmachine: Using SSH client type: native
	I0603 12:45:39.592372    6624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:45:39.592513    6624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 12:45:39.770720    6624 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 12:45:39.770858    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:41.915807    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:41.916009    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:41.916138    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:45:44.464781    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:45:44.464781    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:44.471359    6624 main.go:141] libmachine: Using SSH client type: native
	I0603 12:45:44.471359    6624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:45:44.471359    6624 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 12:45:44.623569    6624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:45:44.623569    6624 machine.go:97] duration metric: took 44.1527132s to provisionDockerMachine
	I0603 12:45:44.623569    6624 start.go:293] postStartSetup for "functional-808300" (driver="hyperv")
	I0603 12:45:44.623569    6624 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:45:44.635937    6624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:45:44.636938    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:46.759122    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:46.759208    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:46.759208    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:45:49.328594    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:45:49.328594    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:49.328863    6624 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:45:49.438668    6624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8007632s)
	I0603 12:45:49.452400    6624 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:45:49.459143    6624 command_runner.go:130] > NAME=Buildroot
	I0603 12:45:49.459446    6624 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 12:45:49.459446    6624 command_runner.go:130] > ID=buildroot
	I0603 12:45:49.459446    6624 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 12:45:49.459446    6624 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 12:45:49.459446    6624 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:45:49.459599    6624 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 12:45:49.459842    6624 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 12:45:49.461178    6624 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 12:45:49.461251    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 12:45:49.462262    6624 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts -> hosts in /etc/test/nested/copy/10544
	I0603 12:45:49.462334    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts -> /etc/test/nested/copy/10544/hosts
	I0603 12:45:49.477097    6624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/10544
	I0603 12:45:49.495076    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 12:45:49.553241    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts --> /etc/test/nested/copy/10544/hosts (40 bytes)
	I0603 12:45:49.602907    6624 start.go:296] duration metric: took 4.9792975s for postStartSetup
	I0603 12:45:49.603020    6624 fix.go:56] duration metric: took 51.9102312s for fixHost
	I0603 12:45:49.603020    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:51.780260    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:51.780260    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:51.780366    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:45:54.377661    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:45:54.377789    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:54.383520    6624 main.go:141] libmachine: Using SSH client type: native
	I0603 12:45:54.384060    6624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:45:54.384060    6624 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:45:54.531396    6624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717418754.544322875
	
	I0603 12:45:54.531396    6624 fix.go:216] guest clock: 1717418754.544322875
	I0603 12:45:54.531396    6624 fix.go:229] Guest: 2024-06-03 12:45:54.544322875 +0000 UTC Remote: 2024-06-03 12:45:49.60302 +0000 UTC m=+57.413606901 (delta=4.941302875s)
	I0603 12:45:54.531605    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:45:56.647747    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:45:56.647747    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:56.648687    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:45:59.222288    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:45:59.222288    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:45:59.229871    6624 main.go:141] libmachine: Using SSH client type: native
	I0603 12:45:59.229936    6624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:45:59.229936    6624 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717418754
	I0603 12:45:59.380336    6624 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:45:54 UTC 2024
	
	I0603 12:45:59.380336    6624 fix.go:236] clock set: Mon Jun  3 12:45:54 UTC 2024
	 (err=<nil>)
	I0603 12:45:59.380336    6624 start.go:83] releasing machines lock for "functional-808300", held for 1m1.6874668s
	I0603 12:45:59.380336    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:46:01.551213    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:46:01.551491    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:46:01.551562    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:46:04.176001    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:46:04.176001    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:46:04.180551    6624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:46:04.180970    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:46:04.201433    6624 ssh_runner.go:195] Run: cat /version.json
	I0603 12:46:04.202064    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:46:06.397647    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:46:06.397647    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:46:06.398423    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:46:06.436749    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:46:06.436749    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:46:06.436970    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:46:09.097518    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:46:09.097518    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:46:09.098073    6624 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:46:09.134792    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:46:09.134902    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:46:09.135035    6624 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:46:09.250421    6624 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 12:46:09.250421    6624 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0695072s)
	I0603 12:46:09.250421    6624 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0603 12:46:09.250421    6624 ssh_runner.go:235] Completed: cat /version.json: (5.0484089s)
	I0603 12:46:09.263969    6624 ssh_runner.go:195] Run: systemctl --version
	I0603 12:46:09.273661    6624 command_runner.go:130] > systemd 252 (252)
	I0603 12:46:09.273661    6624 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0603 12:46:09.286836    6624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 12:46:09.300083    6624 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0603 12:46:09.300083    6624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:46:09.313793    6624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:46:09.331707    6624 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 12:46:09.331751    6624 start.go:494] detecting cgroup driver to use...
	I0603 12:46:09.331992    6624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:46:09.364977    6624 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 12:46:09.377086    6624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 12:46:09.409248    6624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 12:46:09.427748    6624 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 12:46:09.443267    6624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 12:46:09.477847    6624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:46:09.514085    6624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 12:46:09.551091    6624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:46:09.588713    6624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:46:09.625490    6624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 12:46:09.657227    6624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 12:46:09.687352    6624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 12:46:09.717087    6624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:46:09.733986    6624 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 12:46:09.746020    6624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:46:09.779042    6624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:46:10.087287    6624 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 12:46:10.117942    6624 start.go:494] detecting cgroup driver to use...
	I0603 12:46:10.131334    6624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 12:46:10.157347    6624 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 12:46:10.157446    6624 command_runner.go:130] > [Unit]
	I0603 12:46:10.157446    6624 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 12:46:10.157521    6624 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 12:46:10.157521    6624 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 12:46:10.157521    6624 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 12:46:10.157521    6624 command_runner.go:130] > StartLimitBurst=3
	I0603 12:46:10.157521    6624 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 12:46:10.157521    6624 command_runner.go:130] > [Service]
	I0603 12:46:10.157592    6624 command_runner.go:130] > Type=notify
	I0603 12:46:10.157592    6624 command_runner.go:130] > Restart=on-failure
	I0603 12:46:10.157592    6624 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 12:46:10.157592    6624 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 12:46:10.157657    6624 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 12:46:10.157657    6624 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 12:46:10.157683    6624 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 12:46:10.157713    6624 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 12:46:10.157713    6624 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 12:46:10.157713    6624 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 12:46:10.157713    6624 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 12:46:10.157713    6624 command_runner.go:130] > ExecStart=
	I0603 12:46:10.157713    6624 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 12:46:10.157713    6624 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 12:46:10.157713    6624 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 12:46:10.157713    6624 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 12:46:10.157713    6624 command_runner.go:130] > LimitNOFILE=infinity
	I0603 12:46:10.157713    6624 command_runner.go:130] > LimitNPROC=infinity
	I0603 12:46:10.157713    6624 command_runner.go:130] > LimitCORE=infinity
	I0603 12:46:10.157713    6624 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 12:46:10.157713    6624 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 12:46:10.157713    6624 command_runner.go:130] > TasksMax=infinity
	I0603 12:46:10.157713    6624 command_runner.go:130] > TimeoutStartSec=0
	I0603 12:46:10.157713    6624 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 12:46:10.157713    6624 command_runner.go:130] > Delegate=yes
	I0603 12:46:10.157713    6624 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 12:46:10.157713    6624 command_runner.go:130] > KillMode=process
	I0603 12:46:10.157713    6624 command_runner.go:130] > [Install]
	I0603 12:46:10.157713    6624 command_runner.go:130] > WantedBy=multi-user.target
	I0603 12:46:10.169548    6624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:46:10.215553    6624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:46:10.262637    6624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:46:10.300317    6624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 12:46:10.325042    6624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:46:10.361758    6624 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 12:46:10.372754    6624 ssh_runner.go:195] Run: which cri-dockerd
	I0603 12:46:10.378801    6624 command_runner.go:130] > /usr/bin/cri-dockerd
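
The step above points crictl at the cri-dockerd socket by writing a one-line /etc/crictl.yaml and then confirms the cri-dockerd binary is on PATH. A hedged Go sketch of the same two actions (the file content is copied from the log; the program itself is only illustrative and must run as root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same single-line config the log writes via tee.
	const crictlConf = "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlConf), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of `which cri-dockerd`.
	path, err := exec.LookPath("cri-dockerd")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cri-dockerd not found on PATH:", err)
		os.Exit(1)
	}
	fmt.Println("crictl configured; cri-dockerd at", path)
}
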
	I0603 12:46:10.390857    6624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 12:46:10.412457    6624 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 12:46:10.459614    6624 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 12:46:10.715023    6624 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 12:46:10.961405    6624 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 12:46:10.961405    6624 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 12:46:11.008251    6624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:46:11.282422    6624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 12:46:24.117694    6624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.8350435s)
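
docker.go:574 reports switching Docker to the "cgroupfs" cgroup driver by copying a small /etc/docker/daemon.json before the restart. The 130-byte file itself is not shown in the log, so the sketch below only illustrates a minimal daemon.json that would select that driver through the standard native.cgroupdriver exec-opt; minikube's real file may carry additional options.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Minimal illustrative daemon.json; the exact file minikube writes is an assumption here.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", append(data, '\n'), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote /etc/docker/daemon.json; restart docker to apply")
}
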
	I0603 12:46:24.130126    6624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 12:46:24.168114    6624 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0603 12:46:24.234851    6624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 12:46:24.276036    6624 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 12:46:24.491267    6624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 12:46:24.700375    6624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:46:24.902286    6624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 12:46:24.941307    6624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 12:46:24.974317    6624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:46:25.193547    6624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 12:46:25.310129    6624 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 12:46:25.321803    6624 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 12:46:25.336060    6624 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 12:46:25.336832    6624 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 12:46:25.336832    6624 command_runner.go:130] > Device: 0,22	Inode: 1432        Links: 1
	I0603 12:46:25.336832    6624 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 12:46:25.336832    6624 command_runner.go:130] > Access: 2024-06-03 12:46:25.228424819 +0000
	I0603 12:46:25.336832    6624 command_runner.go:130] > Modify: 2024-06-03 12:46:25.228424819 +0000
	I0603 12:46:25.336832    6624 command_runner.go:130] > Change: 2024-06-03 12:46:25.231424752 +0000
	I0603 12:46:25.336832    6624 command_runner.go:130] >  Birth: -
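
start.go:541 waits up to 60s for /var/run/cri-dockerd.sock to appear and then stats it, as shown above. A minimal poller with the same path and deadline (illustrative, not the actual minikube implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists as a unix socket or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cri-dockerd socket is up")
}
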
	I0603 12:46:25.336949    6624 start.go:562] Will wait 60s for crictl version
	I0603 12:46:25.349458    6624 ssh_runner.go:195] Run: which crictl
	I0603 12:46:25.354461    6624 command_runner.go:130] > /usr/bin/crictl
	I0603 12:46:25.365458    6624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:46:25.417065    6624 command_runner.go:130] > Version:  0.1.0
	I0603 12:46:25.417065    6624 command_runner.go:130] > RuntimeName:  docker
	I0603 12:46:25.417065    6624 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 12:46:25.417065    6624 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 12:46:25.417065    6624 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 12:46:25.425090    6624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 12:46:25.456156    6624 command_runner.go:130] > 26.0.2
	I0603 12:46:25.465593    6624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 12:46:25.490160    6624 command_runner.go:130] > 26.0.2
	I0603 12:46:25.496160    6624 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 12:46:25.496160    6624 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 12:46:25.500155    6624 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 12:46:25.500155    6624 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 12:46:25.500155    6624 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 12:46:25.500155    6624 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 12:46:25.503162    6624 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 12:46:25.503162    6624 ip.go:210] interface addr: 172.22.144.1/20
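
ip.go scans the host's network interfaces for one whose name starts with "vEthernet (Default Switch)" and uses its IPv4 address (172.22.144.1/20 here) as host.minikube.internal. A rough standard-library equivalent, with the prefix taken from the log and the helper name made up for illustration:

package main

import (
	"fmt"
	"net"
	"os"
	"strings"
)

// addrsForInterfacePrefix returns the addresses of the first interface whose
// name starts with the given prefix, e.g. the Hyper-V "vEthernet (Default Switch)" adapter.
func addrsForInterfacePrefix(prefix string) ([]net.Addr, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, ifc := range ifaces {
		if strings.HasPrefix(ifc.Name, prefix) {
			return ifc.Addrs()
		}
	}
	return nil, fmt.Errorf("no interface matching prefix %q", prefix)
}

func main() {
	addrs, err := addrsForInterfacePrefix("vEthernet (Default Switch)")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, a := range addrs {
		fmt.Println("interface addr:", a.String())
	}
}
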
	I0603 12:46:25.515155    6624 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 12:46:25.521160    6624 command_runner.go:130] > 172.22.144.1	host.minikube.internal
	I0603 12:46:25.521879    6624 kubeadm.go:877] updating cluster {Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.1 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:46:25.521879    6624 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 12:46:25.530745    6624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 12:46:25.551796    6624 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:46:25.551889    6624 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:46:25.551889    6624 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:46:25.551889    6624 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:46:25.551889    6624 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 12:46:25.551889    6624 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:46:25.551889    6624 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 12:46:25.551889    6624 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:46:25.551980    6624 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0603 12:46:25.552050    6624 docker.go:615] Images already preloaded, skipping extraction
	I0603 12:46:25.561240    6624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 12:46:25.585614    6624 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 12:46:25.585697    6624 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 12:46:25.585697    6624 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 12:46:25.585697    6624 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 12:46:25.585697    6624 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 12:46:25.585797    6624 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 12:46:25.585797    6624 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 12:46:25.585797    6624 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:46:25.585797    6624 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0603 12:46:25.585913    6624 cache_images.go:84] Images are preloaded, skipping loading
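
The preload check above lists what the runtime already has with "docker images --format {{.Repository}}:{{.Tag}}" and skips extraction when every expected image is present. A hedged sketch of that comparison, shelling out to the same command (the required-image list is the one printed in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.1",
		"registry.k8s.io/kube-controller-manager:v1.30.1",
		"registry.k8s.io/kube-scheduler:v1.30.1",
		"registry.k8s.io/kube-proxy:v1.30.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, would extract preload:", img)
			os.Exit(1)
		}
	}
	fmt.Println("all images already present, skipping extraction")
}
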
	I0603 12:46:25.586019    6624 kubeadm.go:928] updating node { 172.22.146.164 8441 v1.30.1 docker true true} ...
	I0603 12:46:25.586158    6624 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-808300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.146.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:46:25.596587    6624 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 12:46:25.626963    6624 command_runner.go:130] > cgroupfs
	I0603 12:46:25.627347    6624 cni.go:84] Creating CNI manager for ""
	I0603 12:46:25.627347    6624 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:46:25.627347    6624 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:46:25.627431    6624 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.22.146.164 APIServerPort:8441 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-808300 NodeName:functional-808300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.22.146.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.22.146.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:46:25.627662    6624 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.22.146.164
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-808300"
	  kubeletExtraArgs:
	    node-ip: 172.22.146.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.22.146.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:46:25.639441    6624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:46:25.656158    6624 command_runner.go:130] > kubeadm
	I0603 12:46:25.656274    6624 command_runner.go:130] > kubectl
	I0603 12:46:25.656274    6624 command_runner.go:130] > kubelet
	I0603 12:46:25.656274    6624 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:46:25.669188    6624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:46:25.685179    6624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 12:46:25.714199    6624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:46:25.745032    6624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0603 12:46:25.786766    6624 ssh_runner.go:195] Run: grep 172.22.146.164	control-plane.minikube.internal$ /etc/hosts
	I0603 12:46:25.792564    6624 command_runner.go:130] > 172.22.146.164	control-plane.minikube.internal
	I0603 12:46:25.804660    6624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:46:26.002039    6624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:46:26.029101    6624 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300 for IP: 172.22.146.164
	I0603 12:46:26.029101    6624 certs.go:194] generating shared ca certs ...
	I0603 12:46:26.029101    6624 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:46:26.029101    6624 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 12:46:26.029101    6624 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 12:46:26.029101    6624 certs.go:256] generating profile certs ...
	I0603 12:46:26.029101    6624 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.key
	I0603 12:46:26.031371    6624 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\apiserver.key.ae4a33a6
	I0603 12:46:26.031656    6624 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\proxy-client.key
	I0603 12:46:26.031656    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 12:46:26.031656    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 12:46:26.031656    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 12:46:26.032196    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 12:46:26.032476    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 12:46:26.032589    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 12:46:26.032589    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 12:46:26.032589    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 12:46:26.033458    6624 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 12:46:26.033458    6624 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 12:46:26.033458    6624 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 12:46:26.034172    6624 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 12:46:26.034172    6624 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 12:46:26.034695    6624 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 12:46:26.034775    6624 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 12:46:26.035300    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:46:26.035510    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
	I0603 12:46:26.035510    6624 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 12:46:26.036652    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:46:26.090057    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:46:26.135481    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:46:26.180577    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 12:46:26.227523    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 12:46:26.312413    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 12:46:26.434865    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:46:26.487054    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 12:46:26.548356    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:46:26.611595    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 12:46:26.663195    6624 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 12:46:26.715765    6624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:46:26.766056    6624 ssh_runner.go:195] Run: openssl version
	I0603 12:46:26.774053    6624 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 12:46:26.786054    6624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:46:26.821044    6624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:46:26.829562    6624 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:46:26.829631    6624 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:46:26.841568    6624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:46:26.850580    6624 command_runner.go:130] > b5213941
	I0603 12:46:26.861826    6624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:46:26.891857    6624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 12:46:26.925524    6624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 12:46:26.932333    6624 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 12:46:26.932448    6624 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 12:46:26.943966    6624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 12:46:26.954972    6624 command_runner.go:130] > 51391683
	I0603 12:46:26.966032    6624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
	I0603 12:46:27.001017    6624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 12:46:27.035030    6624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 12:46:27.046522    6624 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 12:46:27.047105    6624 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 12:46:27.058039    6624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 12:46:27.071442    6624 command_runner.go:130] > 3ec20f2e
	I0603 12:46:27.084685    6624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
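
The block above installs each CA where OpenSSL expects to find it: "openssl x509 -hash -noout" prints the subject hash (b5213941, 51391683 and 3ec20f2e here) and the certificate is symlinked into /etc/ssl/certs as <hash>.0. An illustrative wrapper that reuses the same openssl invocation (the function itself is an assumption, not minikube code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert symlinks certPath into /etc/ssl/certs as <subject-hash>.0,
// using the same openssl x509 -hash -noout call seen in the log.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replicate `ln -fs` behaviour: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CA hash symlink created")
}
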
	I0603 12:46:27.121043    6624 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:46:27.131066    6624 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:46:27.131066    6624 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0603 12:46:27.131066    6624 command_runner.go:130] > Device: 8,1	Inode: 9431378     Links: 1
	I0603 12:46:27.131066    6624 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 12:46:27.131066    6624 command_runner.go:130] > Access: 2024-06-03 12:44:17.614922837 +0000
	I0603 12:46:27.131066    6624 command_runner.go:130] > Modify: 2024-06-03 12:44:17.614922837 +0000
	I0603 12:46:27.131066    6624 command_runner.go:130] > Change: 2024-06-03 12:44:17.614922837 +0000
	I0603 12:46:27.131066    6624 command_runner.go:130] >  Birth: 2024-06-03 12:44:17.614922837 +0000
	I0603 12:46:27.145939    6624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:46:27.176380    6624 command_runner.go:130] > Certificate will not expire
	I0603 12:46:27.189996    6624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:46:27.206015    6624 command_runner.go:130] > Certificate will not expire
	I0603 12:46:27.217019    6624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:46:27.224995    6624 command_runner.go:130] > Certificate will not expire
	I0603 12:46:27.235027    6624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:46:27.244600    6624 command_runner.go:130] > Certificate will not expire
	I0603 12:46:27.255601    6624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:46:27.264603    6624 command_runner.go:130] > Certificate will not expire
	I0603 12:46:27.277671    6624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 12:46:27.287732    6624 command_runner.go:130] > Certificate will not expire
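
The -checkend 86400 calls above ask OpenSSL whether each cluster certificate expires within the next 24 hours. The same test expressed with crypto/x509, as a hedged sketch (the path is one of those checked above; the helper is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the condition under which `openssl x509 -noout -checkend <seconds>` would fail.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
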
	I0603 12:46:27.288287    6624 kubeadm.go:391] StartCluster: {Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.1 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:46:27.298310    6624 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 12:46:27.377670    6624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 12:46:27.399316    6624 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0603 12:46:27.399316    6624 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0603 12:46:27.399316    6624 command_runner.go:130] > /var/lib/minikube/etcd:
	I0603 12:46:27.399316    6624 command_runner.go:130] > member
	W0603 12:46:27.400320    6624 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 12:46:27.400320    6624 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 12:46:27.400320    6624 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 12:46:27.413319    6624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 12:46:27.427318    6624 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:46:27.428319    6624 kubeconfig.go:125] found "functional-808300" server: "https://172.22.146.164:8441"
	I0603 12:46:27.429323    6624 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:46:27.430333    6624 kapi.go:59] client config for functional-808300: &rest.Config{Host:"https://172.22.146.164:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-808300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-808300\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 12:46:27.431317    6624 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 12:46:27.442322    6624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 12:46:27.464897    6624 kubeadm.go:624] The running cluster does not require reconfiguration: 172.22.146.164
	I0603 12:46:27.464897    6624 kubeadm.go:1154] stopping kube-system containers ...
	I0603 12:46:27.473524    6624 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 12:46:27.588887    6624 command_runner.go:130] > 02843dfe5169
	I0603 12:46:27.588971    6624 command_runner.go:130] > 155addeb6f57
	I0603 12:46:27.588971    6624 command_runner.go:130] > 86b73cfdf66c
	I0603 12:46:27.588971    6624 command_runner.go:130] > 75af9fb73ddd
	I0603 12:46:27.588971    6624 command_runner.go:130] > 69c1d2f0cb64
	I0603 12:46:27.588971    6624 command_runner.go:130] > eb74516b16cf
	I0603 12:46:27.588971    6624 command_runner.go:130] > 5d6e5cc420d9
	I0603 12:46:27.588971    6624 command_runner.go:130] > ce20c4c25d18
	I0603 12:46:27.589082    6624 command_runner.go:130] > 943112509e28
	I0603 12:46:27.589082    6624 command_runner.go:130] > 33eefe9c472b
	I0603 12:46:27.589082    6624 command_runner.go:130] > 6992d335d419
	I0603 12:46:27.589082    6624 command_runner.go:130] > 68532ac6c504
	I0603 12:46:27.589082    6624 command_runner.go:130] > 9d93705fdb4a
	I0603 12:46:27.589082    6624 command_runner.go:130] > c4fb3a7c664e
	I0603 12:46:27.589082    6624 command_runner.go:130] > 04d2064bec32
	I0603 12:46:27.589082    6624 command_runner.go:130] > 96a2f05f2230
	I0603 12:46:27.589082    6624 command_runner.go:130] > 1dccd16bf407
	I0603 12:46:27.589082    6624 command_runner.go:130] > 2189bdf4fdf5
	I0603 12:46:27.589082    6624 command_runner.go:130] > 99e6936fbfd3
	I0603 12:46:27.589082    6624 command_runner.go:130] > 27708ce50b04
	I0603 12:46:27.589161    6624 command_runner.go:130] > 23fd19559e87
	I0603 12:46:27.589161    6624 command_runner.go:130] > e4a3d1aad706
	I0603 12:46:27.589161    6624 command_runner.go:130] > d92f2286f410
	I0603 12:46:27.589161    6624 command_runner.go:130] > edfe17d226ba
	I0603 12:46:27.589161    6624 command_runner.go:130] > 455f2c45f264
	I0603 12:46:27.589256    6624 docker.go:483] Stopping containers: [02843dfe5169 155addeb6f57 86b73cfdf66c 75af9fb73ddd 69c1d2f0cb64 eb74516b16cf 5d6e5cc420d9 ce20c4c25d18 943112509e28 33eefe9c472b 6992d335d419 68532ac6c504 9d93705fdb4a c4fb3a7c664e 04d2064bec32 96a2f05f2230 1dccd16bf407 2189bdf4fdf5 99e6936fbfd3 27708ce50b04 23fd19559e87 e4a3d1aad706 d92f2286f410 edfe17d226ba 455f2c45f264]
	I0603 12:46:27.598444    6624 ssh_runner.go:195] Run: docker stop 02843dfe5169 155addeb6f57 86b73cfdf66c 75af9fb73ddd 69c1d2f0cb64 eb74516b16cf 5d6e5cc420d9 ce20c4c25d18 943112509e28 33eefe9c472b 6992d335d419 68532ac6c504 9d93705fdb4a c4fb3a7c664e 04d2064bec32 96a2f05f2230 1dccd16bf407 2189bdf4fdf5 99e6936fbfd3 27708ce50b04 23fd19559e87 e4a3d1aad706 d92f2286f410 edfe17d226ba 455f2c45f264
	I0603 12:46:29.910135    6624 command_runner.go:130] > 02843dfe5169
	I0603 12:46:29.910135    6624 command_runner.go:130] > 155addeb6f57
	I0603 12:46:29.910203    6624 command_runner.go:130] > 86b73cfdf66c
	I0603 12:46:29.910203    6624 command_runner.go:130] > 75af9fb73ddd
	I0603 12:46:29.910203    6624 command_runner.go:130] > 69c1d2f0cb64
	I0603 12:46:29.910203    6624 command_runner.go:130] > eb74516b16cf
	I0603 12:46:29.910203    6624 command_runner.go:130] > 5d6e5cc420d9
	I0603 12:46:29.910203    6624 command_runner.go:130] > ce20c4c25d18
	I0603 12:46:29.910203    6624 command_runner.go:130] > 943112509e28
	I0603 12:46:29.910203    6624 command_runner.go:130] > 33eefe9c472b
	I0603 12:46:29.910203    6624 command_runner.go:130] > 6992d335d419
	I0603 12:46:29.910203    6624 command_runner.go:130] > 68532ac6c504
	I0603 12:46:29.910203    6624 command_runner.go:130] > 9d93705fdb4a
	I0603 12:46:29.910203    6624 command_runner.go:130] > c4fb3a7c664e
	I0603 12:46:29.910203    6624 command_runner.go:130] > 04d2064bec32
	I0603 12:46:29.910203    6624 command_runner.go:130] > 96a2f05f2230
	I0603 12:46:29.910203    6624 command_runner.go:130] > 1dccd16bf407
	I0603 12:46:29.910203    6624 command_runner.go:130] > 2189bdf4fdf5
	I0603 12:46:29.910203    6624 command_runner.go:130] > 99e6936fbfd3
	I0603 12:46:29.910203    6624 command_runner.go:130] > 27708ce50b04
	I0603 12:46:29.910203    6624 command_runner.go:130] > 23fd19559e87
	I0603 12:46:29.910203    6624 command_runner.go:130] > e4a3d1aad706
	I0603 12:46:29.910203    6624 command_runner.go:130] > d92f2286f410
	I0603 12:46:29.910203    6624 command_runner.go:130] > edfe17d226ba
	I0603 12:46:29.910203    6624 command_runner.go:130] > 455f2c45f264
	I0603 12:46:29.910203    6624 ssh_runner.go:235] Completed: docker stop 02843dfe5169 155addeb6f57 86b73cfdf66c 75af9fb73ddd 69c1d2f0cb64 eb74516b16cf 5d6e5cc420d9 ce20c4c25d18 943112509e28 33eefe9c472b 6992d335d419 68532ac6c504 9d93705fdb4a c4fb3a7c664e 04d2064bec32 96a2f05f2230 1dccd16bf407 2189bdf4fdf5 99e6936fbfd3 27708ce50b04 23fd19559e87 e4a3d1aad706 d92f2286f410 edfe17d226ba 455f2c45f264: (2.3117394s)
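
On restart, every kube-system container is stopped before the kubeadm phases are re-run: "docker ps -a --filter=name=k8s_.*_(kube-system)_" collects the container IDs and a single "docker stop" takes them all down, as the log shows above. A hedged sketch of that sequence via os/exec, using the filter string from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// List all containers (running or not) whose names match the kube-system pattern.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers to stop")
		return
	}
	// Stop them all in one invocation, mirroring the single docker stop in the log.
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("stopped %d kube-system containers\n", len(ids))
}
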
	I0603 12:46:29.920794    6624 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 12:46:29.990860    6624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:46:30.009879    6624 command_runner.go:130] > -rw------- 1 root root 5647 Jun  3 12:44 /etc/kubernetes/admin.conf
	I0603 12:46:30.009879    6624 command_runner.go:130] > -rw------- 1 root root 5658 Jun  3 12:44 /etc/kubernetes/controller-manager.conf
	I0603 12:46:30.009879    6624 command_runner.go:130] > -rw------- 1 root root 2007 Jun  3 12:44 /etc/kubernetes/kubelet.conf
	I0603 12:46:30.009879    6624 command_runner.go:130] > -rw------- 1 root root 5602 Jun  3 12:44 /etc/kubernetes/scheduler.conf
	I0603 12:46:30.009879    6624 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Jun  3 12:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jun  3 12:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jun  3 12:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jun  3 12:44 /etc/kubernetes/scheduler.conf
	
	I0603 12:46:30.021837    6624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0603 12:46:30.041025    6624 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0603 12:46:30.060601    6624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0603 12:46:30.080541    6624 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0603 12:46:30.091098    6624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0603 12:46:30.109038    6624 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:46:30.121086    6624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:46:30.156997    6624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0603 12:46:30.173260    6624 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:46:30.184812    6624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
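
Before rewriting the control plane, each kubeconfig under /etc/kubernetes is grepped for the expected server line (https://control-plane.minikube.internal:8441); files that no longer point there (controller-manager.conf and scheduler.conf above) are removed so the following kubeadm kubeconfig phase regenerates them. A small illustrative sketch of that check (endpoint and paths are taken from the log; the loop itself is an assumption):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			// Stale endpoint: remove so `kubeadm init phase kubeconfig` rewrites it.
			fmt.Println("removing stale kubeconfig:", f)
			os.Remove(f)
		}
	}
}
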
	I0603 12:46:30.215389    6624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:46:30.240268    6624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:46:30.424975    6624 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:46:30.424975    6624 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0603 12:46:30.424975    6624 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0603 12:46:30.425118    6624 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 12:46:30.425118    6624 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0603 12:46:30.425118    6624 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0603 12:46:30.425118    6624 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0603 12:46:30.425118    6624 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0603 12:46:30.425183    6624 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0603 12:46:30.425183    6624 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 12:46:30.425183    6624 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 12:46:30.425234    6624 command_runner.go:130] > [certs] Using the existing "sa" key
	I0603 12:46:30.425256    6624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:46:32.279883    6624 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:46:32.279978    6624 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0603 12:46:32.279978    6624 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0603 12:46:32.279978    6624 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0603 12:46:32.279978    6624 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:46:32.279978    6624 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:46:32.280046    6624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.8547143s)
	I0603 12:46:32.280118    6624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:46:32.614774    6624 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:46:32.614774    6624 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:46:32.614774    6624 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0603 12:46:32.614917    6624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:46:32.706611    6624 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:46:32.706689    6624 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:46:32.706689    6624 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:46:32.706770    6624 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:46:32.706770    6624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:46:32.819998    6624 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:46:32.819998    6624 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:46:32.832015    6624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:46:33.330975    6624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:46:33.838406    6624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:46:33.871408    6624 command_runner.go:130] > 5528
	I0603 12:46:33.871408    6624 api_server.go:72] duration metric: took 1.0514015s to wait for apiserver process to appear ...
	I0603 12:46:33.871408    6624 api_server.go:88] waiting for apiserver healthz status ...
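
api_server.go now polls https://172.22.146.164:8441/healthz until it returns 200; the 403s (anonymous access is refused until the RBAC bootstrap roles exist) and 500s (some post-start hooks still failing) below are normal interim states for a freshly restarted apiserver. A minimal poller, with TLS verification skipped purely for brevity (the real check would trust the cluster CA and present client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative only: a production check should verify against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://172.22.146.164:8441/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy before the deadline")
}
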
	I0603 12:46:33.871408    6624 api_server.go:253] Checking apiserver healthz at https://172.22.146.164:8441/healthz ...
	I0603 12:46:37.200211    6624 api_server.go:279] https://172.22.146.164:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:46:37.200974    6624 api_server.go:103] status: https://172.22.146.164:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:46:37.200974    6624 api_server.go:253] Checking apiserver healthz at https://172.22.146.164:8441/healthz ...
	I0603 12:46:37.263215    6624 api_server.go:279] https://172.22.146.164:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 12:46:37.263702    6624 api_server.go:103] status: https://172.22.146.164:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 12:46:37.378188    6624 api_server.go:253] Checking apiserver healthz at https://172.22.146.164:8441/healthz ...
	I0603 12:46:37.401918    6624 api_server.go:279] https://172.22.146.164:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:46:37.402064    6624 api_server.go:103] status: https://172.22.146.164:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:46:37.884348    6624 api_server.go:253] Checking apiserver healthz at https://172.22.146.164:8441/healthz ...
	I0603 12:46:37.894169    6624 api_server.go:279] https://172.22.146.164:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:46:37.894907    6624 api_server.go:103] status: https://172.22.146.164:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:46:38.379549    6624 api_server.go:253] Checking apiserver healthz at https://172.22.146.164:8441/healthz ...
	I0603 12:46:38.398051    6624 api_server.go:279] https://172.22.146.164:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 12:46:38.398133    6624 api_server.go:103] status: https://172.22.146.164:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 12:46:38.874609    6624 api_server.go:253] Checking apiserver healthz at https://172.22.146.164:8441/healthz ...
	I0603 12:46:38.884809    6624 api_server.go:279] https://172.22.146.164:8441/healthz returned 200:
	ok
	I0603 12:46:38.885123    6624 round_trippers.go:463] GET https://172.22.146.164:8441/version
	I0603 12:46:38.885169    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:38.885169    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:38.885169    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:38.899116    6624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 12:46:38.899159    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:38.899159    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:38.899159    6624 round_trippers.go:580]     Content-Length: 263
	I0603 12:46:38.899159    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:38 GMT
	I0603 12:46:38.899159    6624 round_trippers.go:580]     Audit-Id: e362e387-dbfc-4b1d-9199-47b3b61e0010
	I0603 12:46:38.899200    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:38.899200    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:38.899200    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:38.899234    6624 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 12:46:38.899382    6624 api_server.go:141] control plane version: v1.30.1
	I0603 12:46:38.899434    6624 api_server.go:131] duration metric: took 5.0279835s to wait for apiserver health ...
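
The stretch of log above shows api_server.go probing https://172.22.146.164:8441/healthz roughly every 500ms: while the rbac/bootstrap-roles and scheduling bootstrap post-start hooks are still pending, the endpoint answers 500 with the per-hook breakdown, and once it answers 200 the control-plane version is confirmed via /version. Below is a minimal sketch of that poll-until-healthy loop in Go; it is illustrative only (plain HTTPS client, TLS verification skipped to keep it self-contained) and is not the code in api_server.go, which authenticates with minikube's client certificates.

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the timeout expires. On 500 it keeps retrying, mirroring the behaviour
// visible in the log above. Skipping TLS verification is only to keep this
// sketch runnable on its own; it is not how minikube connects.
func waitForHealthz(base string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between probes
	}
	return errors.New("apiserver never became healthy")
}

func main() {
	if err := waitForHealthz("https://172.22.146.164:8441", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
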
	I0603 12:46:38.899434    6624 cni.go:84] Creating CNI manager for ""
	I0603 12:46:38.899434    6624 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:46:38.902289    6624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:46:38.914126    6624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:46:38.932021    6624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
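
With the apiserver healthy, cni.go selects the bridge CNI for the "hyperv" driver + "docker" runtime on Kubernetes v1.24+ and copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The log does not print the file contents; the sketch below writes a typical bridge + host-local conflist of the same general shape purely as an illustration. The subnet, the exact field values, and the use of os.WriteFile in place of minikube's ssh/scp runner are all assumptions, not the file minikube actually generates.

package main

import (
	"log"
	"os"
)

// A representative bridge CNI configuration; the real 1-k8s.conflist that
// minikube generates may differ in fields and values (the subnet here is
// an assumption for illustration).
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Equivalent of the "sudo mkdir -p /etc/cni/net.d" step in the log above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
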
	I0603 12:46:38.960440    6624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:46:38.960440    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods
	I0603 12:46:38.960440    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:38.960440    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:38.960440    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:38.967372    6624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:46:38.967372    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:38.967372    6624 round_trippers.go:580]     Audit-Id: 6296872f-d16c-439f-a784-69a270320e21
	I0603 12:46:38.967372    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:38.967372    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:38.967372    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:38.967372    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:38.967372    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:38 GMT
	I0603 12:46:38.970149    6624 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"513"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"511","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52398 chars]
	I0603 12:46:38.975598    6624 system_pods.go:59] 7 kube-system pods found
	I0603 12:46:38.975598    6624 system_pods.go:61] "coredns-7db6d8ff4d-42cp7" [2127dc1b-897b-4fd8-9d36-4f67c5018a98] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 12:46:38.975598    6624 system_pods.go:61] "etcd-functional-808300" [80851d80-1b91-425f-b72f-4f98683e6778] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 12:46:38.975598    6624 system_pods.go:61] "kube-apiserver-functional-808300" [3a5539cf-7aa6-4ff2-9e82-4134e41a13e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 12:46:38.975598    6624 system_pods.go:61] "kube-controller-manager-functional-808300" [15ac4e66-ac8f-4170-b659-55d323432821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 12:46:38.975598    6624 system_pods.go:61] "kube-proxy-66ngx" [9d2a4b61-760c-48da-96bf-18224b420ecc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 12:46:38.975598    6624 system_pods.go:61] "kube-scheduler-functional-808300" [9ed695e8-b04f-4587-b704-bb4caecc3e57] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 12:46:38.975598    6624 system_pods.go:61] "storage-provisioner" [770d8091-cdaf-4c5d-83e4-b93c973a520e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 12:46:38.975598    6624 system_pods.go:74] duration metric: took 15.1571ms to wait for pod list to return data ...
	I0603 12:46:38.975598    6624 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:46:38.975598    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes
	I0603 12:46:38.976138    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:38.976212    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:38.976212    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:38.980177    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:38.980177    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:38.980177    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:38 GMT
	I0603 12:46:38.980177    6624 round_trippers.go:580]     Audit-Id: cc0bc561-c5e1-418e-a59d-d78505a5407d
	I0603 12:46:38.980177    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:38.980177    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:38.980177    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:38.980177    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:38.980177    6624 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"513"},"items":[{"metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0603 12:46:38.981224    6624 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:46:38.981224    6624 node_conditions.go:123] node cpu capacity is 2
	I0603 12:46:38.981224    6624 node_conditions.go:105] duration metric: took 5.6262ms to run NodePressure ...
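
The NodePressure step lists the cluster's nodes and reads two capacity figures from the node status (ephemeral storage 17734596Ki and 2 CPUs here). The client-go sketch below reproduces that read; the kubeconfig path is a placeholder and this is not minikube's node_conditions.go implementation.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Lists the nodes and prints the same two capacity figures the NodePressure
// check logs above: ephemeral storage and CPU count.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity.StorageEphemeral()
		cpu := n.Status.Capacity.Cpu()
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
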
	I0603 12:46:38.981224    6624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 12:46:39.383099    6624 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0603 12:46:39.383159    6624 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0603 12:46:39.383159    6624 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 12:46:39.383459    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0603 12:46:39.383494    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:39.383494    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:39.383494    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:39.386440    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:39.386440    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:39.387320    6624 round_trippers.go:580]     Audit-Id: 85435ad2-e60e-424c-944e-5b8aafa929e0
	I0603 12:46:39.387320    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:39.387320    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:39.387320    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:39.387320    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:39.387320    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:39 GMT
	I0603 12:46:39.388195    6624 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"518"},"items":[{"metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31409 chars]
	I0603 12:46:39.389281    6624 kubeadm.go:733] kubelet initialised
	I0603 12:46:39.389811    6624 kubeadm.go:734] duration metric: took 6.5415ms waiting for restarted kubelet to initialise ...
	I0603 12:46:39.389859    6624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:46:39.390009    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods
	I0603 12:46:39.390009    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:39.390072    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:39.390072    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:39.394882    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:39.395510    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:39.395510    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:39.395510    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:39.395510    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:39.395510    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:39.395510    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:39 GMT
	I0603 12:46:39.395584    6624 round_trippers.go:580]     Audit-Id: 28accf81-e2d3-4d10-b003-4cda058fd333
	I0603 12:46:39.397941    6624 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"518"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"516","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52045 chars]
	I0603 12:46:39.400252    6624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-42cp7" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:39.400252    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-42cp7
	I0603 12:46:39.400252    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:39.400252    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:39.400252    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:39.403974    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:39.404032    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:39.404032    6624 round_trippers.go:580]     Audit-Id: 202c1385-04cc-4e63-aa61-67704bc69d2f
	I0603 12:46:39.404032    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:39.404032    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:39.404032    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:39.404032    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:39.404113    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:39 GMT
	I0603 12:46:39.404356    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"516","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0603 12:46:39.405028    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:39.405085    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:39.405085    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:39.405085    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:39.407956    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:39.407956    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:39.407956    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:39.407956    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:39 GMT
	I0603 12:46:39.408644    6624 round_trippers.go:580]     Audit-Id: 18545630-a2a0-4f2f-b6d0-4283d37221c4
	I0603 12:46:39.408644    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:39.408644    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:39.408644    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:39.408787    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:39.914163    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-42cp7
	I0603 12:46:39.914163    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:39.914163    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:39.914275    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:39.920565    6624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:46:39.920637    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:39.920637    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:39.920702    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:39.920702    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:39 GMT
	I0603 12:46:39.920702    6624 round_trippers.go:580]     Audit-Id: fa961626-97ff-45c8-b47f-982f6e59beed
	I0603 12:46:39.920791    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:39.920791    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:39.921548    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"516","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0603 12:46:39.921946    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:39.921946    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:39.921946    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:39.921946    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:39.926012    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:39.926012    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:39.926012    6624 round_trippers.go:580]     Audit-Id: 721c3d01-4097-4131-8803-0791cc406640
	I0603 12:46:39.926012    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:39.926012    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:39.926012    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:39.926012    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:39.926012    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:39 GMT
	I0603 12:46:39.926012    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:40.414697    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-42cp7
	I0603 12:46:40.414905    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:40.414905    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:40.414905    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:40.419482    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:40.419482    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:40.419482    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:40.419482    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:40.419482    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:40 GMT
	I0603 12:46:40.419482    6624 round_trippers.go:580]     Audit-Id: 2731771a-2aa5-4eed-a3dd-0af7760c879b
	I0603 12:46:40.419482    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:40.419624    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:40.419624    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"516","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0603 12:46:40.420330    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:40.420330    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:40.420330    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:40.420330    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:40.422894    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:40.422894    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:40.423668    6624 round_trippers.go:580]     Audit-Id: a0a0dfed-b5c9-4689-93f7-c9daeb602991
	I0603 12:46:40.423668    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:40.423668    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:40.423668    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:40.423668    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:40.423752    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:40 GMT
	I0603 12:46:40.424205    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:40.912035    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-42cp7
	I0603 12:46:40.912108    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:40.912108    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:40.912108    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:40.916519    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:40.916621    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:40.916621    6624 round_trippers.go:580]     Audit-Id: e0bc43e9-b61c-43a0-875a-06ddafbe349d
	I0603 12:46:40.916621    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:40.916621    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:40.916621    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:40.916621    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:40.916621    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:40 GMT
	I0603 12:46:40.916738    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"516","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0603 12:46:40.917974    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:40.918069    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:40.918069    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:40.918069    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:40.923033    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:40.923282    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:40.923282    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:40 GMT
	I0603 12:46:40.923282    6624 round_trippers.go:580]     Audit-Id: 28002cd8-5c7d-4972-a10f-d08ae031882d
	I0603 12:46:40.923282    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:40.923282    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:40.923282    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:40.923282    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:40.923282    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:41.413914    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-42cp7
	I0603 12:46:41.413914    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:41.413914    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:41.413914    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:41.418145    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:41.418939    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:41.418939    6624 round_trippers.go:580]     Audit-Id: 37fdfad4-5e66-4b2b-bff0-419a05745d03
	I0603 12:46:41.418939    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:41.418939    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:41.418939    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:41.418939    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:41.419044    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 12:46:41.419268    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"516","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0603 12:46:41.420041    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:41.420041    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:41.420041    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:41.420041    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:41.422625    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:41.423465    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:41.423465    6624 round_trippers.go:580]     Audit-Id: a5e24759-8c2b-4509-802f-a3393a8daeb1
	I0603 12:46:41.423465    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:41.423465    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:41.423465    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:41.423465    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:41.423465    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 12:46:41.423659    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:41.423985    6624 pod_ready.go:102] pod "coredns-7db6d8ff4d-42cp7" in "kube-system" namespace has status "Ready":"False"
	I0603 12:46:41.913411    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-42cp7
	I0603 12:46:41.913411    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:41.913411    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:41.913411    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:41.916934    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:41.916934    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:41.916934    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:41.916934    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:41.916934    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 12:46:41.916934    6624 round_trippers.go:580]     Audit-Id: 704b6ae1-5506-486a-997c-d478b5524275
	I0603 12:46:41.916934    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:41.916934    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:41.917887    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"516","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0603 12:46:41.918665    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:41.918665    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:41.918665    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:41.918665    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:41.925037    6624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:46:41.925037    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:41.925037    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:41.925037    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:41.925037    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:41.925037    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:41 GMT
	I0603 12:46:41.925037    6624 round_trippers.go:580]     Audit-Id: 73c6f06e-8eab-4059-8bec-7645a6cd1a93
	I0603 12:46:41.925037    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:41.925626    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:42.409407    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-42cp7
	I0603 12:46:42.409464    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:42.409464    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:42.409464    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:42.412061    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:42.412061    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:42.412969    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:42.412969    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:42.412969    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:42.412969    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:42.412969    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 12:46:42.412969    6624 round_trippers.go:580]     Audit-Id: cfb12b4b-091a-40a7-a160-2ee428da69b4
	I0603 12:46:42.413426    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"516","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0603 12:46:42.414278    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:42.414278    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:42.414278    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:42.414278    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:42.416862    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:42.417739    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:42.417739    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:42.417811    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:42.417811    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:42.417811    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:42.417811    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 12:46:42.417811    6624 round_trippers.go:580]     Audit-Id: 956684fb-a8b0-4149-86f5-4dc2f4a5fec7
	I0603 12:46:42.417811    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:42.911840    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-42cp7
	I0603 12:46:42.911840    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:42.911840    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:42.911840    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:42.915512    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:42.915512    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:42.915512    6624 round_trippers.go:580]     Audit-Id: ecf09015-dcba-4df0-b809-3cc968836445
	I0603 12:46:42.915512    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:42.915990    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:42.915990    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:42.915990    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:42.916048    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 12:46:42.916280    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"565","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0603 12:46:42.917153    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:42.917153    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:42.917214    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:42.917214    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:42.920105    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:42.920149    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:42.920149    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:42.920149    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:42.920149    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:42.920149    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:42.920149    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 12:46:42.920149    6624 round_trippers.go:580]     Audit-Id: f3adc862-b8e9-48a0-90d4-c83e227bbf4c
	I0603 12:46:42.921080    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:42.921080    6624 pod_ready.go:92] pod "coredns-7db6d8ff4d-42cp7" in "kube-system" namespace has status "Ready":"True"
	I0603 12:46:42.921080    6624 pod_ready.go:81] duration metric: took 3.5207987s for pod "coredns-7db6d8ff4d-42cp7" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:42.921080    6624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:42.921681    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:42.921767    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:42.921788    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:42.921788    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:42.924671    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:42.924671    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:42.924871    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:42.924871    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 12:46:42.924871    6624 round_trippers.go:580]     Audit-Id: 25f0c3de-d927-460d-b78d-f470f1857a71
	I0603 12:46:42.924871    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:42.924871    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:42.924871    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:42.924871    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:42.926670    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:42.926670    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:42.926670    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:42.926670    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:42.929670    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:42.929670    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:42.929670    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:42.929670    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:42.929670    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:42 GMT
	I0603 12:46:42.930351    6624 round_trippers.go:580]     Audit-Id: 0432449f-775e-45e6-9226-6abbdcc33c0a
	I0603 12:46:42.930351    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:42.930351    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:42.930731    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:43.429100    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:43.429100    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:43.429100    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:43.429100    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:43.432688    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:43.433471    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:43.433471    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:43.433471    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:43.433471    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:43.433471    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:43.433471    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:43 GMT
	I0603 12:46:43.433471    6624 round_trippers.go:580]     Audit-Id: 5d2c6409-58b8-4454-b525-a614de8c2168
	I0603 12:46:43.433937    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:43.434480    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:43.434480    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:43.434684    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:43.434684    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:43.437861    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:43.437861    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:43.437861    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:43.437861    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:43 GMT
	I0603 12:46:43.437861    6624 round_trippers.go:580]     Audit-Id: 64eec6c5-1ad0-4a50-941e-43237f28db1e
	I0603 12:46:43.437861    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:43.438314    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:43.438314    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:43.438363    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:43.931322    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:43.931611    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:43.931611    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:43.931611    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:43.936209    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:43.936244    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:43.936244    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:43 GMT
	I0603 12:46:43.936244    6624 round_trippers.go:580]     Audit-Id: efa03d38-60ce-4f4e-936d-beb70fd5ff1e
	I0603 12:46:43.936244    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:43.936244    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:43.936244    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:43.936244    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:43.937105    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:43.938128    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:43.938128    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:43.938218    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:43.938218    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:43.940405    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:43.940810    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:43.940914    6624 round_trippers.go:580]     Audit-Id: dc21773e-bd4f-4388-a694-4b67b6b03c8d
	I0603 12:46:43.940914    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:43.940914    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:43.940914    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:43.940973    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:43.940973    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:43 GMT
	I0603 12:46:43.941067    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:44.435647    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:44.435647    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:44.435727    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:44.435727    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:44.439633    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:44.439633    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:44.439633    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:44 GMT
	I0603 12:46:44.439633    6624 round_trippers.go:580]     Audit-Id: 77f546cf-6103-414b-8a17-972b3b8937b0
	I0603 12:46:44.439633    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:44.439633    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:44.439633    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:44.439633    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:44.439633    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:44.440654    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:44.440654    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:44.440654    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:44.440654    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:44.443501    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:44.443914    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:44.443914    6624 round_trippers.go:580]     Audit-Id: 9f8fbb90-d921-420a-86a5-ff44ce5e37ef
	I0603 12:46:44.443914    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:44.443914    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:44.443914    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:44.443914    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:44.444012    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:44 GMT
	I0603 12:46:44.444467    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:44.934021    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:44.934111    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:44.934111    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:44.934111    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:44.938281    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:44.938281    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:44.938281    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:44.938281    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:44.938281    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:44.938281    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:44 GMT
	I0603 12:46:44.938281    6624 round_trippers.go:580]     Audit-Id: 7aed85a6-e2f4-48d5-8a4d-717842a83257
	I0603 12:46:44.938281    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:44.939844    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:44.940580    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:44.940580    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:44.940580    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:44.940580    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:44.943056    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:44.943773    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:44.943773    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:44 GMT
	I0603 12:46:44.943860    6624 round_trippers.go:580]     Audit-Id: a5e772d8-11f6-4a10-a260-9627cc585be8
	I0603 12:46:44.943860    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:44.943860    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:44.943886    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:44.943886    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:44.943913    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:44.944668    6624 pod_ready.go:102] pod "etcd-functional-808300" in "kube-system" namespace has status "Ready":"False"
	I0603 12:46:45.434666    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:45.434666    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:45.434666    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:45.434744    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:45.440688    6624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 12:46:45.440763    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:45.440763    6624 round_trippers.go:580]     Audit-Id: 6a49894e-afb8-4b23-96c8-b92258214839
	I0603 12:46:45.440763    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:45.440763    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:45.440763    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:45.440763    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:45.440763    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:45 GMT
	I0603 12:46:45.440979    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:45.441700    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:45.441764    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:45.441764    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:45.441764    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:45.445500    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:45.445649    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:45.445661    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:45 GMT
	I0603 12:46:45.445661    6624 round_trippers.go:580]     Audit-Id: 1cbee800-bcb5-407f-8088-90923f438761
	I0603 12:46:45.445686    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:45.445686    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:45.445686    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:45.445686    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:45.445742    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:45.933749    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:45.933966    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:45.933966    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:45.933966    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:45.938481    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:45.938481    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:45.938481    6624 round_trippers.go:580]     Audit-Id: 57f86243-1495-42c0-af11-25bc3c16b4ab
	I0603 12:46:45.938481    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:45.938595    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:45.938595    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:45.938595    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:45.938595    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:45 GMT
	I0603 12:46:45.938826    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:45.939632    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:45.939804    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:45.939804    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:45.939804    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:45.942174    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:45.942174    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:45.942174    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:45.942174    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:45.942174    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:45.942174    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:45 GMT
	I0603 12:46:45.942943    6624 round_trippers.go:580]     Audit-Id: 6f3abce5-6029-4867-ace5-2d05672acc8a
	I0603 12:46:45.942943    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:45.943113    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:46.435089    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:46.435327    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:46.435327    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:46.435327    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:46.438649    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:46.438649    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:46.439394    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:46.439394    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:46.439394    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:46.439394    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:46 GMT
	I0603 12:46:46.439394    6624 round_trippers.go:580]     Audit-Id: 28268c7c-3a54-4575-94c0-2ddf0fd4c8d0
	I0603 12:46:46.439394    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:46.440222    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:46.440481    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:46.440481    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:46.440481    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:46.440481    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:46.444091    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:46.444244    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:46.444244    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:46.444244    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:46.444244    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:46.444244    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:46.444244    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:46 GMT
	I0603 12:46:46.444345    6624 round_trippers.go:580]     Audit-Id: e5d3ebdd-715b-46e6-80f1-dc2b6662b742
	I0603 12:46:46.444651    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:46.930550    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:46.930765    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:46.930765    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:46.930874    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:46.934684    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:46.935312    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:46.935312    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:46.935407    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:46 GMT
	I0603 12:46:46.935407    6624 round_trippers.go:580]     Audit-Id: 40cf302f-921b-4725-b4d2-9c8268823f77
	I0603 12:46:46.935407    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:46.935407    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:46.935407    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:46.935718    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:46.936955    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:46.936955    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:46.936955    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:46.936955    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:46.939584    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:46.939584    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:46.940262    6624 round_trippers.go:580]     Audit-Id: 8c22cb94-3ca5-4312-bc6e-6933766af9ed
	I0603 12:46:46.940262    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:46.940262    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:46.940262    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:46.940262    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:46.940262    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:46 GMT
	I0603 12:46:46.940262    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:47.430187    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:47.430236    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:47.430236    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:47.430236    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:47.434685    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:47.434685    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:47.434685    6624 round_trippers.go:580]     Audit-Id: bedf1edc-607f-4a2d-acc8-6b4b40987a16
	I0603 12:46:47.434685    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:47.434685    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:47.434685    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:47.434685    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:47.434685    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:47 GMT
	I0603 12:46:47.435142    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:47.435867    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:47.435867    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:47.435867    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:47.435867    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:47.441689    6624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 12:46:47.441689    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:47.441689    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:47 GMT
	I0603 12:46:47.441689    6624 round_trippers.go:580]     Audit-Id: c1f12393-1ed0-4028-ae24-151f765933e1
	I0603 12:46:47.441689    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:47.441689    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:47.441689    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:47.441689    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:47.442223    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:47.442429    6624 pod_ready.go:102] pod "etcd-functional-808300" in "kube-system" namespace has status "Ready":"False"
	I0603 12:46:47.927876    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:47.927953    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:47.927953    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:47.927953    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:47.933748    6624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 12:46:47.934175    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:47.934175    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:47.934175    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:47.934249    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:47 GMT
	I0603 12:46:47.934249    6624 round_trippers.go:580]     Audit-Id: 5d7f9804-7178-45a9-82d5-19ca50eec260
	I0603 12:46:47.934249    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:47.934249    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:47.934409    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:47.935001    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:47.935001    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:47.935001    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:47.935001    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:47.939597    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:47.939597    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:47.939597    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:47.939597    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:47.939597    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:47.939597    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:47 GMT
	I0603 12:46:47.939597    6624 round_trippers.go:580]     Audit-Id: 1d037d0d-79a3-4121-b01e-a6a19d63284a
	I0603 12:46:47.939597    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:47.939597    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:48.428792    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:48.428792    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:48.428792    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:48.428792    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:48.433377    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:48.433434    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:48.433434    6624 round_trippers.go:580]     Audit-Id: b6e8f95e-aa91-4ead-9ccf-86598dab2c09
	I0603 12:46:48.433434    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:48.433434    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:48.433434    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:48.433434    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:48.433434    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:48 GMT
	I0603 12:46:48.433612    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:48.434511    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:48.434511    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:48.434511    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:48.434511    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:48.438057    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:48.438301    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:48.438301    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:48.438301    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:48.438301    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:48.438301    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:48.438301    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:48 GMT
	I0603 12:46:48.438430    6624 round_trippers.go:580]     Audit-Id: 3d7071b8-2ce9-46b9-986f-e6f8b6f7cc14
	I0603 12:46:48.438483    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:48.929414    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:48.929558    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:48.929558    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:48.929558    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:48.934121    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:48.934211    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:48.934211    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:48.934211    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:48.934211    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:48 GMT
	I0603 12:46:48.934211    6624 round_trippers.go:580]     Audit-Id: 4c1ef6ce-7ce6-48cd-888a-def71cb91a20
	I0603 12:46:48.934211    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:48.934211    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:48.934970    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:48.935525    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:48.935525    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:48.935525    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:48.935525    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:48.939100    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:48.939178    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:48.939178    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:48.939178    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:48.939178    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:48 GMT
	I0603 12:46:48.939178    6624 round_trippers.go:580]     Audit-Id: c6be5840-8d06-437f-9fa4-35fe967cffd0
	I0603 12:46:48.939178    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:48.939178    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:48.939178    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:49.427610    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:49.427610    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:49.427610    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:49.427610    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:49.432147    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:49.432233    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:49.432233    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:49.432233    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:49.432233    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:49 GMT
	I0603 12:46:49.432233    6624 round_trippers.go:580]     Audit-Id: 5002df4d-cf43-4847-a476-e730ead1ba69
	I0603 12:46:49.432233    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:49.432233    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:49.432500    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:49.433329    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:49.433329    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:49.433329    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:49.433329    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:49.436111    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:49.436284    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:49.436386    6624 round_trippers.go:580]     Audit-Id: 6977e50a-8705-49ad-9f17-f8e512c44c17
	I0603 12:46:49.436386    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:49.436504    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:49.436504    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:49.436504    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:49.436504    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:49 GMT
	I0603 12:46:49.436928    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:49.923022    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:49.923089    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:49.923151    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:49.923151    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:49.933907    6624 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 12:46:49.933907    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:49.934426    6624 round_trippers.go:580]     Audit-Id: a02581e7-d379-4b7a-b5d7-78106699e3be
	I0603 12:46:49.934426    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:49.934426    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:49.934426    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:49.934426    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:49.934426    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:49 GMT
	I0603 12:46:49.934779    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"506","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0603 12:46:49.935610    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:49.935756    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:49.935756    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:49.935756    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:49.937969    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:49.937969    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:49.937969    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:49.937969    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:49.937969    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:49.937969    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:49 GMT
	I0603 12:46:49.937969    6624 round_trippers.go:580]     Audit-Id: 63b96a2f-e30f-4dd8-bf14-745b22d405be
	I0603 12:46:49.937969    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:49.939127    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:49.939677    6624 pod_ready.go:102] pod "etcd-functional-808300" in "kube-system" namespace has status "Ready":"False"
	I0603 12:46:50.437535    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:50.437535    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:50.437535    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:50.437535    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:50.441160    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:50.442197    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:50.442197    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 12:46:50.442197    6624 round_trippers.go:580]     Audit-Id: e73d767a-debb-49b3-bbad-6022f581375e
	I0603 12:46:50.442197    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:50.442197    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:50.442197    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:50.442197    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:50.442336    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"578","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6520 chars]
	I0603 12:46:50.442881    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:50.442881    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:50.442881    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:50.442881    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:50.445608    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:50.445608    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:50.445608    6624 round_trippers.go:580]     Audit-Id: a95b7623-671a-46a7-b1b8-e1fb88273b06
	I0603 12:46:50.445608    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:50.446053    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:50.446053    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:50.446053    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:50.446053    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 12:46:50.446459    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:50.446793    6624 pod_ready.go:92] pod "etcd-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0603 12:46:50.446793    6624 pod_ready.go:81] duration metric: took 7.5256498s for pod "etcd-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:50.446793    6624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:50.446793    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-808300
	I0603 12:46:50.446793    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:50.446793    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:50.446793    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:50.454890    6624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 12:46:50.454890    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:50.454890    6624 round_trippers.go:580]     Audit-Id: acb86aea-3219-430f-af36-a095972d8c88
	I0603 12:46:50.454890    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:50.454890    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:50.454890    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:50.454890    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:50.454890    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 12:46:50.455901    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-808300","namespace":"kube-system","uid":"3a5539cf-7aa6-4ff2-9e82-4134e41a13e7","resourceVersion":"507","creationTimestamp":"2024-06-03T12:44:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.22.146.164:8441","kubernetes.io/config.hash":"11918179ce61499bb08bfc780760a360","kubernetes.io/config.mirror":"11918179ce61499bb08bfc780760a360","kubernetes.io/config.seen":"2024-06-03T12:44:20.681990075Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8303 chars]
	I0603 12:46:50.455901    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:50.456677    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:50.456677    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:50.456677    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:50.458969    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:50.459203    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:50.459203    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 12:46:50.459203    6624 round_trippers.go:580]     Audit-Id: 8025149a-189a-46c2-bd89-47262604e558
	I0603 12:46:50.459203    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:50.459203    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:50.459203    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:50.459203    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:50.459497    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:50.953602    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-808300
	I0603 12:46:50.953602    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:50.953602    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:50.953718    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:50.958031    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:50.958031    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:50.958031    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 12:46:50.958031    6624 round_trippers.go:580]     Audit-Id: 172f37b9-4773-431d-9c1c-475cb207e753
	I0603 12:46:50.958532    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:50.958532    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:50.958532    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:50.958532    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:50.958759    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-808300","namespace":"kube-system","uid":"3a5539cf-7aa6-4ff2-9e82-4134e41a13e7","resourceVersion":"580","creationTimestamp":"2024-06-03T12:44:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.22.146.164:8441","kubernetes.io/config.hash":"11918179ce61499bb08bfc780760a360","kubernetes.io/config.mirror":"11918179ce61499bb08bfc780760a360","kubernetes.io/config.seen":"2024-06-03T12:44:20.681990075Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8059 chars]
	I0603 12:46:50.959444    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:50.959444    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:50.959444    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:50.959444    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:50.963421    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:50.963421    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:50.963421    6624 round_trippers.go:580]     Audit-Id: dca0b881-0012-44ed-8ca1-6416e1cd7ada
	I0603 12:46:50.963421    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:50.963588    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:50.963588    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:50.963588    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:50.963588    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 12:46:50.963790    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:50.963923    6624 pod_ready.go:92] pod "kube-apiserver-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0603 12:46:50.963923    6624 pod_ready.go:81] duration metric: took 517.126ms for pod "kube-apiserver-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:50.963923    6624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:50.963923    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-808300
	I0603 12:46:50.963923    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:50.963923    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:50.963923    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:50.966647    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:50.967225    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:50.967225    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:50.967225    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:50.967225    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:50.967225    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:50.967225    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 12:46:50.967225    6624 round_trippers.go:580]     Audit-Id: 933aaf5b-c627-472d-9409-e77dad492407
	I0603 12:46:50.967618    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-808300","namespace":"kube-system","uid":"15ac4e66-ac8f-4170-b659-55d323432821","resourceVersion":"571","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"73110dc8e3b32662e2416873e3ae2581","kubernetes.io/config.mirror":"73110dc8e3b32662e2416873e3ae2581","kubernetes.io/config.seen":"2024-06-03T12:44:28.599243008Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7617 chars]
	I0603 12:46:50.968125    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:50.968179    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:50.968179    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:50.968179    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:50.970532    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:50.970807    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:50.970807    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:50.970807    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:50.970807    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:50.970807    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:50.970807    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 12:46:50.970807    6624 round_trippers.go:580]     Audit-Id: e6123f83-64a1-429a-8895-77feebefd470
	I0603 12:46:50.971050    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:50.971603    6624 pod_ready.go:92] pod "kube-controller-manager-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0603 12:46:50.971603    6624 pod_ready.go:81] duration metric: took 7.6799ms for pod "kube-controller-manager-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:50.971664    6624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-66ngx" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:50.971750    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-proxy-66ngx
	I0603 12:46:50.971750    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:50.971880    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:50.971880    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:50.973584    6624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 12:46:50.974310    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:50.974381    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:50.974381    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:50.974381    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:50.974381    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 12:46:50.974381    6624 round_trippers.go:580]     Audit-Id: 27c2d8f1-1d09-4be1-af80-d17b457d6de8
	I0603 12:46:50.974381    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:50.974747    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-66ngx","generateName":"kube-proxy-","namespace":"kube-system","uid":"9d2a4b61-760c-48da-96bf-18224b420ecc","resourceVersion":"517","creationTimestamp":"2024-06-03T12:44:41Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cf9e78eb-3849-4af8-b5ea-398986eafd9f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cf9e78eb-3849-4af8-b5ea-398986eafd9f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6180 chars]
	I0603 12:46:50.975183    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:50.975183    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:50.975183    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:50.975183    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:50.978450    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:50.978509    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:50.978509    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:50.978509    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:50.978509    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:50.978509    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:50.978509    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 12:46:50.978509    6624 round_trippers.go:580]     Audit-Id: b6302f0f-7206-40d5-bfb8-f0ecd98910e1
	I0603 12:46:50.978762    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:50.978913    6624 pod_ready.go:92] pod "kube-proxy-66ngx" in "kube-system" namespace has status "Ready":"True"
	I0603 12:46:50.978913    6624 pod_ready.go:81] duration metric: took 7.2481ms for pod "kube-proxy-66ngx" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:50.978913    6624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:50.978913    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-808300
	I0603 12:46:50.978913    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:50.978913    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:50.978913    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:50.981526    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:50.981526    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:50.981526    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:50.981526    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 12:46:50.981526    6624 round_trippers.go:580]     Audit-Id: 95ac2a0a-b09c-48c1-8493-819dc38b6e2c
	I0603 12:46:50.981526    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:50.981526    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:50.981526    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:50.982223    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-808300","namespace":"kube-system","uid":"9ed695e8-b04f-4587-b704-bb4caecc3e57","resourceVersion":"567","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bc28fa7dc16cbf596fb8051c5a6b8fb1","kubernetes.io/config.mirror":"bc28fa7dc16cbf596fb8051c5a6b8fb1","kubernetes.io/config.seen":"2024-06-03T12:44:20.681992175Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0603 12:46:50.982726    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:50.982792    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:50.982792    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:50.982792    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:50.985476    6624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:46:50.985604    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:50.985604    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:50.985604    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:50.985604    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:50.985709    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:50 GMT
	I0603 12:46:50.985709    6624 round_trippers.go:580]     Audit-Id: fe6d8b1c-d3b6-4ee7-ad66-e8f4b1cee064
	I0603 12:46:50.985709    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:50.986055    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:50.986212    6624 pod_ready.go:92] pod "kube-scheduler-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0603 12:46:50.986212    6624 pod_ready.go:81] duration metric: took 7.2993ms for pod "kube-scheduler-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:50.986212    6624 pod_ready.go:38] duration metric: took 11.5962551s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:46:50.986212    6624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:46:51.005483    6624 command_runner.go:130] > -16
	I0603 12:46:51.005483    6624 ops.go:34] apiserver oom_adj: -16
	I0603 12:46:51.005483    6624 kubeadm.go:591] duration metric: took 23.6049644s to restartPrimaryControlPlane
	I0603 12:46:51.005483    6624 kubeadm.go:393] duration metric: took 23.7170872s to StartCluster
	I0603 12:46:51.005618    6624 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:46:51.006035    6624 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:46:51.007808    6624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:46:51.009142    6624 start.go:234] Will wait 6m0s for node &{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 12:46:51.009142    6624 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:46:51.013805    6624 out.go:177] * Verifying Kubernetes components...
	I0603 12:46:51.009687    6624 addons.go:69] Setting default-storageclass=true in profile "functional-808300"
	I0603 12:46:51.009142    6624 addons.go:69] Setting storage-provisioner=true in profile "functional-808300"
	I0603 12:46:51.009928    6624 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:46:51.013805    6624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-808300"
	I0603 12:46:51.016298    6624 addons.go:234] Setting addon storage-provisioner=true in "functional-808300"
	W0603 12:46:51.016298    6624 addons.go:243] addon storage-provisioner should already be in state true
	I0603 12:46:51.016298    6624 host.go:66] Checking if "functional-808300" exists ...
	I0603 12:46:51.017219    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:46:51.017352    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:46:51.031942    6624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:46:51.317175    6624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:46:51.343038    6624 node_ready.go:35] waiting up to 6m0s for node "functional-808300" to be "Ready" ...
	I0603 12:46:51.343278    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:51.343278    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:51.343278    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:51.343278    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:51.346965    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:51.347976    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:51.347976    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 12:46:51.347976    6624 round_trippers.go:580]     Audit-Id: bf3d3f74-8154-4092-a40f-5b6f7d22c42a
	I0603 12:46:51.347976    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:51.347976    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:51.347976    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:51.347976    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:51.347976    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:51.347976    6624 node_ready.go:49] node "functional-808300" has status "Ready":"True"
	I0603 12:46:51.347976    6624 node_ready.go:38] duration metric: took 4.8521ms for node "functional-808300" to be "Ready" ...
	I0603 12:46:51.347976    6624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:46:51.348973    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods
	I0603 12:46:51.348973    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:51.348973    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:51.348973    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:51.353973    6624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 12:46:51.353973    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:51.353973    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:51.354263    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:51.354263    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:51.354263    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 12:46:51.354263    6624 round_trippers.go:580]     Audit-Id: 16c29138-cab3-4af8-ada3-5a239fdc847d
	I0603 12:46:51.354263    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:51.356107    6624 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"580"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"565","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50842 chars]
	I0603 12:46:51.358985    6624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-42cp7" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:51.443642    6624 request.go:629] Waited for 84.5095ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-42cp7
	I0603 12:46:51.443886    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-42cp7
	I0603 12:46:51.444107    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:51.444107    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:51.444107    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:51.448832    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:51.448832    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:51.448972    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:51.448972    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 12:46:51.448972    6624 round_trippers.go:580]     Audit-Id: c97265f5-c7d5-409a-87c5-bb4038cd1410
	I0603 12:46:51.448972    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:51.448972    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:51.448972    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:51.449315    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"565","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0603 12:46:51.649777    6624 request.go:629] Waited for 199.837ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:51.650099    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:51.650144    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:51.650144    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:51.650144    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:51.654773    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:51.654773    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:51.654872    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:51.654872    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:51.654872    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 12:46:51.654872    6624 round_trippers.go:580]     Audit-Id: 0813af4b-9213-447e-a0ae-4cc12a52462c
	I0603 12:46:51.654951    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:51.655004    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:51.655320    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:51.656161    6624 pod_ready.go:92] pod "coredns-7db6d8ff4d-42cp7" in "kube-system" namespace has status "Ready":"True"
	I0603 12:46:51.656161    6624 pod_ready.go:81] duration metric: took 297.1002ms for pod "coredns-7db6d8ff4d-42cp7" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:51.656161    6624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:51.842810    6624 request.go:629] Waited for 186.6477ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:51.843020    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0603 12:46:51.843020    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:51.843020    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:51.843020    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:51.846735    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:51.846735    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:51.846735    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:51.847166    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:51.847166    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:51.847166    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:51 GMT
	I0603 12:46:51.847166    6624 round_trippers.go:580]     Audit-Id: 90dfc23f-397c-46e9-91c4-b8be612c1980
	I0603 12:46:51.847166    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:51.848032    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"80851d80-1b91-425f-b72f-4f98683e6778","resourceVersion":"578","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.146.164:2379","kubernetes.io/config.hash":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.mirror":"bbe69fe3ee69755b32446ace652cadef","kubernetes.io/config.seen":"2024-06-03T12:44:20.681986375Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6520 chars]
	I0603 12:46:52.047690    6624 request.go:629] Waited for 198.8686ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:52.047842    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:52.047842    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:52.047842    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:52.047842    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:52.051471    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:52.051471    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:52.051471    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:52.051713    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:52.051713    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:52.051713    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:52.051713    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 12:46:52.051713    6624 round_trippers.go:580]     Audit-Id: 8449b0bf-1e97-491a-bde9-cb4d48abe7ab
	I0603 12:46:52.052436    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:52.053144    6624 pod_ready.go:92] pod "etcd-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0603 12:46:52.053144    6624 pod_ready.go:81] duration metric: took 396.98ms for pod "etcd-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:52.053248    6624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:52.237979    6624 request.go:629] Waited for 184.5043ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-808300
	I0603 12:46:52.238066    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-808300
	I0603 12:46:52.238150    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:52.238150    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:52.238150    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:52.241751    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:52.241751    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:52.241961    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:52.241961    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:52.241961    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 12:46:52.241961    6624 round_trippers.go:580]     Audit-Id: 8bacb28e-6b75-491a-a31e-724b9b97c211
	I0603 12:46:52.241961    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:52.241961    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:52.242235    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-808300","namespace":"kube-system","uid":"3a5539cf-7aa6-4ff2-9e82-4134e41a13e7","resourceVersion":"580","creationTimestamp":"2024-06-03T12:44:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.22.146.164:8441","kubernetes.io/config.hash":"11918179ce61499bb08bfc780760a360","kubernetes.io/config.mirror":"11918179ce61499bb08bfc780760a360","kubernetes.io/config.seen":"2024-06-03T12:44:20.681990075Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8059 chars]
	I0603 12:46:52.444817    6624 request.go:629] Waited for 201.6113ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:52.445041    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:52.445102    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:52.445102    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:52.445102    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:52.448658    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:52.449239    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:52.449239    6624 round_trippers.go:580]     Audit-Id: 5f02eb49-9a13-4d82-b76c-eea6a46040cd
	I0603 12:46:52.449239    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:52.449239    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:52.449239    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:52.449239    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:52.449239    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 12:46:52.449483    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:52.449629    6624 pod_ready.go:92] pod "kube-apiserver-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0603 12:46:52.449629    6624 pod_ready.go:81] duration metric: took 396.3784ms for pod "kube-apiserver-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:52.449629    6624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:52.651824    6624 request.go:629] Waited for 201.3427ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-808300
	I0603 12:46:52.651909    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-808300
	I0603 12:46:52.651909    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:52.652053    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:52.652053    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:52.656078    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:52.656682    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:52.656682    6624 round_trippers.go:580]     Audit-Id: 8bad5b30-52da-4404-af0b-3c943f98ed8a
	I0603 12:46:52.656682    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:52.656682    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:52.656682    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:52.656682    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:52.656682    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 12:46:52.657079    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-808300","namespace":"kube-system","uid":"15ac4e66-ac8f-4170-b659-55d323432821","resourceVersion":"571","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"73110dc8e3b32662e2416873e3ae2581","kubernetes.io/config.mirror":"73110dc8e3b32662e2416873e3ae2581","kubernetes.io/config.seen":"2024-06-03T12:44:28.599243008Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7617 chars]
	I0603 12:46:52.842746    6624 request.go:629] Waited for 184.6939ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:52.843075    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:52.843075    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:52.843075    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:52.843155    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:52.846747    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:52.846747    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:52.846747    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:52 GMT
	I0603 12:46:52.846747    6624 round_trippers.go:580]     Audit-Id: 7a7f33c6-4c17-442b-a9e0-2772deae501a
	I0603 12:46:52.847043    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:52.847043    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:52.847043    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:52.847043    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:52.847285    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:52.847285    6624 pod_ready.go:92] pod "kube-controller-manager-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0603 12:46:52.847285    6624 pod_ready.go:81] duration metric: took 397.6521ms for pod "kube-controller-manager-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:52.847828    6624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-66ngx" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:53.049262    6624 request.go:629] Waited for 201.1004ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-proxy-66ngx
	I0603 12:46:53.049262    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-proxy-66ngx
	I0603 12:46:53.049262    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:53.049262    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:53.049262    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:53.053614    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:53.054463    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:53.054530    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 12:46:53.054530    6624 round_trippers.go:580]     Audit-Id: 389d6470-d8ad-4b30-8312-15c0cd2ac4fb
	I0603 12:46:53.054530    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:53.054530    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:53.054530    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:53.054578    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:53.054682    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-66ngx","generateName":"kube-proxy-","namespace":"kube-system","uid":"9d2a4b61-760c-48da-96bf-18224b420ecc","resourceVersion":"517","creationTimestamp":"2024-06-03T12:44:41Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cf9e78eb-3849-4af8-b5ea-398986eafd9f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cf9e78eb-3849-4af8-b5ea-398986eafd9f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6180 chars]
	I0603 12:46:53.240869    6624 request.go:629] Waited for 185.3698ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:53.241096    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:53.241096    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:53.241096    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:53.241096    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:53.244681    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:53.245062    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:53.245062    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:53.245062    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 12:46:53.245129    6624 round_trippers.go:580]     Audit-Id: e3e4ee8c-32ae-4713-bba8-01d40c9f913e
	I0603 12:46:53.245129    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:53.245129    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:53.245129    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:53.245129    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:53.245666    6624 pod_ready.go:92] pod "kube-proxy-66ngx" in "kube-system" namespace has status "Ready":"True"
	I0603 12:46:53.245666    6624 pod_ready.go:81] duration metric: took 397.8352ms for pod "kube-proxy-66ngx" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:53.245666    6624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:53.301039    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:46:53.301039    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:46:53.305278    6624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:46:53.301822    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:46:53.307653    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:46:53.307731    6624 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:46:53.307731    6624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:46:53.307731    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:46:53.308282    6624 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:46:53.308509    6624 kapi.go:59] client config for functional-808300: &rest.Config{Host:"https://172.22.146.164:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-808300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-808300\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 12:46:53.309393    6624 addons.go:234] Setting addon default-storageclass=true in "functional-808300"
	W0603 12:46:53.309393    6624 addons.go:243] addon default-storageclass should already be in state true
	I0603 12:46:53.309393    6624 host.go:66] Checking if "functional-808300" exists ...
	I0603 12:46:53.310237    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:46:53.446488    6624 request.go:629] Waited for 200.8196ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-808300
	I0603 12:46:53.446757    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-808300
	I0603 12:46:53.446757    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:53.446757    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:53.446757    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:53.453922    6624 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 12:46:53.453994    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:53.454061    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:53.454061    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 12:46:53.454061    6624 round_trippers.go:580]     Audit-Id: a9460c5c-bbb1-45dd-a8d2-16baefb8b383
	I0603 12:46:53.454061    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:53.454061    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:53.454125    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:53.455448    6624 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-808300","namespace":"kube-system","uid":"9ed695e8-b04f-4587-b704-bb4caecc3e57","resourceVersion":"567","creationTimestamp":"2024-06-03T12:44:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bc28fa7dc16cbf596fb8051c5a6b8fb1","kubernetes.io/config.mirror":"bc28fa7dc16cbf596fb8051c5a6b8fb1","kubernetes.io/config.seen":"2024-06-03T12:44:20.681992175Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0603 12:46:53.652599    6624 request.go:629] Waited for 196.2204ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:53.652799    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes/functional-808300
	I0603 12:46:53.652900    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:53.652900    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:53.652977    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:53.657468    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:53.658379    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:53.658379    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:53.658379    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 12:46:53.658379    6624 round_trippers.go:580]     Audit-Id: 0e444db5-cab2-4f6c-8496-75f942648096
	I0603 12:46:53.658379    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:53.658379    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:53.658379    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:53.658895    6624 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-03T12:44:25Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0603 12:46:53.659492    6624 pod_ready.go:92] pod "kube-scheduler-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0603 12:46:53.659575    6624 pod_ready.go:81] duration metric: took 413.9056ms for pod "kube-scheduler-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0603 12:46:53.659575    6624 pod_ready.go:38] duration metric: took 2.3115803s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:46:53.659683    6624 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:46:53.676725    6624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:46:53.703898    6624 command_runner.go:130] > 5528
	I0603 12:46:53.703898    6624 api_server.go:72] duration metric: took 2.6947339s to wait for apiserver process to appear ...
	I0603 12:46:53.703898    6624 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:46:53.703898    6624 api_server.go:253] Checking apiserver healthz at https://172.22.146.164:8441/healthz ...
	I0603 12:46:53.711932    6624 api_server.go:279] https://172.22.146.164:8441/healthz returned 200:
	ok
	I0603 12:46:53.712593    6624 round_trippers.go:463] GET https://172.22.146.164:8441/version
	I0603 12:46:53.712593    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:53.712593    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:53.712711    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:53.713946    6624 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 12:46:53.714778    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:53.714778    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:53.714778    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:53.714778    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:53.714778    6624 round_trippers.go:580]     Content-Length: 263
	I0603 12:46:53.714778    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 12:46:53.714778    6624 round_trippers.go:580]     Audit-Id: 4dfadc6b-965e-4b1b-ae97-578a33f54bd2
	I0603 12:46:53.714778    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:53.714894    6624 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 12:46:53.714997    6624 api_server.go:141] control plane version: v1.30.1
	I0603 12:46:53.715073    6624 api_server.go:131] duration metric: took 11.174ms to wait for apiserver health ...
	I0603 12:46:53.715073    6624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:46:53.841375    6624 request.go:629] Waited for 126.3012ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods
	I0603 12:46:53.841375    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods
	I0603 12:46:53.841375    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:53.841375    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:53.841375    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:53.846586    6624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 12:46:53.846586    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:53.846586    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:53.846586    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:53 GMT
	I0603 12:46:53.846586    6624 round_trippers.go:580]     Audit-Id: bb05c447-fcc2-4132-97c1-aa7ffd52b8bd
	I0603 12:46:53.846586    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:53.847587    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:53.847587    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:53.848956    6624 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"580"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"565","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50842 chars]
	I0603 12:46:53.851531    6624 system_pods.go:59] 7 kube-system pods found
	I0603 12:46:53.851574    6624 system_pods.go:61] "coredns-7db6d8ff4d-42cp7" [2127dc1b-897b-4fd8-9d36-4f67c5018a98] Running
	I0603 12:46:53.851574    6624 system_pods.go:61] "etcd-functional-808300" [80851d80-1b91-425f-b72f-4f98683e6778] Running
	I0603 12:46:53.851574    6624 system_pods.go:61] "kube-apiserver-functional-808300" [3a5539cf-7aa6-4ff2-9e82-4134e41a13e7] Running
	I0603 12:46:53.851574    6624 system_pods.go:61] "kube-controller-manager-functional-808300" [15ac4e66-ac8f-4170-b659-55d323432821] Running
	I0603 12:46:53.851642    6624 system_pods.go:61] "kube-proxy-66ngx" [9d2a4b61-760c-48da-96bf-18224b420ecc] Running
	I0603 12:46:53.851642    6624 system_pods.go:61] "kube-scheduler-functional-808300" [9ed695e8-b04f-4587-b704-bb4caecc3e57] Running
	I0603 12:46:53.851642    6624 system_pods.go:61] "storage-provisioner" [770d8091-cdaf-4c5d-83e4-b93c973a520e] Running
	I0603 12:46:53.851642    6624 system_pods.go:74] duration metric: took 136.5684ms to wait for pod list to return data ...
	I0603 12:46:53.851685    6624 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:46:54.047417    6624 request.go:629] Waited for 195.3037ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/namespaces/default/serviceaccounts
	I0603 12:46:54.047482    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/default/serviceaccounts
	I0603 12:46:54.047482    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:54.047482    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:54.047482    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:54.051084    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:46:54.051084    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:54.051084    6624 round_trippers.go:580]     Audit-Id: 14742e01-8596-4dcc-9d5a-61afdd8255fb
	I0603 12:46:54.051084    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:54.051084    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:54.051084    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:54.051084    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:54.051084    6624 round_trippers.go:580]     Content-Length: 261
	I0603 12:46:54.051679    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:54 GMT
	I0603 12:46:54.051829    6624 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"580"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a34c6f40-db55-4f29-9205-002798b502d2","resourceVersion":"338","creationTimestamp":"2024-06-03T12:44:41Z"}}]}
	I0603 12:46:54.051900    6624 default_sa.go:45] found service account: "default"
	I0603 12:46:54.051900    6624 default_sa.go:55] duration metric: took 200.2133ms for default service account to be created ...
	I0603 12:46:54.051900    6624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:46:54.254246    6624 request.go:629] Waited for 202.1264ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods
	I0603 12:46:54.254319    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods
	I0603 12:46:54.254319    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:54.254319    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:54.254319    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:54.259912    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:54.259912    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:54.259990    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:54 GMT
	I0603 12:46:54.259990    6624 round_trippers.go:580]     Audit-Id: b427dd01-ecfa-4e15-90eb-9cb7f13594b0
	I0603 12:46:54.259990    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:54.259990    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:54.259990    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:54.259990    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:54.260912    6624 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"580"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-42cp7","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2127dc1b-897b-4fd8-9d36-4f67c5018a98","resourceVersion":"565","creationTimestamp":"2024-06-03T12:44:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a581bb35-2553-412e-8a84-97fa52ff043f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T12:44:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a581bb35-2553-412e-8a84-97fa52ff043f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50842 chars]
	I0603 12:46:54.263335    6624 system_pods.go:86] 7 kube-system pods found
	I0603 12:46:54.263335    6624 system_pods.go:89] "coredns-7db6d8ff4d-42cp7" [2127dc1b-897b-4fd8-9d36-4f67c5018a98] Running
	I0603 12:46:54.263411    6624 system_pods.go:89] "etcd-functional-808300" [80851d80-1b91-425f-b72f-4f98683e6778] Running
	I0603 12:46:54.263411    6624 system_pods.go:89] "kube-apiserver-functional-808300" [3a5539cf-7aa6-4ff2-9e82-4134e41a13e7] Running
	I0603 12:46:54.263411    6624 system_pods.go:89] "kube-controller-manager-functional-808300" [15ac4e66-ac8f-4170-b659-55d323432821] Running
	I0603 12:46:54.263411    6624 system_pods.go:89] "kube-proxy-66ngx" [9d2a4b61-760c-48da-96bf-18224b420ecc] Running
	I0603 12:46:54.263411    6624 system_pods.go:89] "kube-scheduler-functional-808300" [9ed695e8-b04f-4587-b704-bb4caecc3e57] Running
	I0603 12:46:54.263411    6624 system_pods.go:89] "storage-provisioner" [770d8091-cdaf-4c5d-83e4-b93c973a520e] Running
	I0603 12:46:54.263411    6624 system_pods.go:126] duration metric: took 211.5095ms to wait for k8s-apps to be running ...
	I0603 12:46:54.263411    6624 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:46:54.275680    6624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:46:54.301637    6624 system_svc.go:56] duration metric: took 38.2257ms WaitForService to wait for kubelet
	I0603 12:46:54.301637    6624 kubeadm.go:576] duration metric: took 3.2924673s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:46:54.301637    6624 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:46:54.442224    6624 request.go:629] Waited for 140.5857ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.146.164:8441/api/v1/nodes
	I0603 12:46:54.442224    6624 round_trippers.go:463] GET https://172.22.146.164:8441/api/v1/nodes
	I0603 12:46:54.442224    6624 round_trippers.go:469] Request Headers:
	I0603 12:46:54.442224    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:46:54.442224    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:46:54.446238    6624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:46:54.446297    6624 round_trippers.go:577] Response Headers:
	I0603 12:46:54.446297    6624 round_trippers.go:580]     Audit-Id: a440b417-b497-4f3f-8271-d73827c69d19
	I0603 12:46:54.446297    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:46:54.446297    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:46:54.446297    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:46:54.446367    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:46:54.446367    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:46:54 GMT
	I0603 12:46:54.446425    6624 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"580"},"items":[{"metadata":{"name":"functional-808300","uid":"8b315539-aadc-49c3-98c7-e09603ab5739","resourceVersion":"503","creationTimestamp":"2024-06-03T12:44:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T12_44_29_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0603 12:46:54.452361    6624 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:46:54.452515    6624 node_conditions.go:123] node cpu capacity is 2
	I0603 12:46:54.452515    6624 node_conditions.go:105] duration metric: took 150.8764ms to run NodePressure ...
	I0603 12:46:54.452515    6624 start.go:240] waiting for startup goroutines ...
	I0603 12:46:55.575879    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:46:55.575879    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:46:55.575879    6624 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:46:55.575879    6624 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:46:55.576082    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:46:55.576839    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:46:55.576839    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:46:55.576839    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:46:57.819151    6624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:46:57.819965    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:46:57.820038    6624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:46:58.244526    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:46:58.244526    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:46:58.244526    6624 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:46:58.390935    6624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:46:59.207507    6624 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0603 12:46:59.207592    6624 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0603 12:46:59.207654    6624 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0603 12:46:59.207654    6624 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0603 12:46:59.207654    6624 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0603 12:46:59.207714    6624 command_runner.go:130] > pod/storage-provisioner configured
	I0603 12:47:00.377054    6624 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:47:00.377811    6624 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:47:00.377996    6624 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:47:00.527297    6624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:47:00.692040    6624 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0603 12:47:00.692293    6624 round_trippers.go:463] GET https://172.22.146.164:8441/apis/storage.k8s.io/v1/storageclasses
	I0603 12:47:00.692403    6624 round_trippers.go:469] Request Headers:
	I0603 12:47:00.692403    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:47:00.692403    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:47:00.698573    6624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:47:00.698573    6624 round_trippers.go:577] Response Headers:
	I0603 12:47:00.698573    6624 round_trippers.go:580]     Audit-Id: 9284ea9d-4b39-4bf3-9a40-b56d8e896f71
	I0603 12:47:00.698573    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:47:00.698573    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:47:00.698573    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:47:00.698573    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:47:00.698573    6624 round_trippers.go:580]     Content-Length: 1273
	I0603 12:47:00.698573    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:00 GMT
	I0603 12:47:00.698573    6624 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"587"},"items":[{"metadata":{"name":"standard","uid":"106e8306-94db-4d36-a289-fdeede501dc4","resourceVersion":"437","creationTimestamp":"2024-06-03T12:44:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-03T12:44:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0603 12:47:00.700255    6624 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"106e8306-94db-4d36-a289-fdeede501dc4","resourceVersion":"437","creationTimestamp":"2024-06-03T12:44:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-03T12:44:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0603 12:47:00.700333    6624 round_trippers.go:463] PUT https://172.22.146.164:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0603 12:47:00.700400    6624 round_trippers.go:469] Request Headers:
	I0603 12:47:00.700400    6624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 12:47:00.700431    6624 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:47:00.700431    6624 round_trippers.go:473]     Content-Type: application/json
	I0603 12:47:00.704181    6624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:47:00.704181    6624 round_trippers.go:577] Response Headers:
	I0603 12:47:00.704181    6624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 16bb201c-b0ef-4147-af9d-a6ab3e49b4b1
	I0603 12:47:00.704181    6624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dcb8f7d5-b35e-4498-b95b-281672f384cf
	I0603 12:47:00.704181    6624 round_trippers.go:580]     Content-Length: 1220
	I0603 12:47:00.704181    6624 round_trippers.go:580]     Date: Mon, 03 Jun 2024 12:47:00 GMT
	I0603 12:47:00.704181    6624 round_trippers.go:580]     Audit-Id: 463bfccd-a53d-495b-baa7-a7d42808006e
	I0603 12:47:00.704181    6624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 12:47:00.704181    6624 round_trippers.go:580]     Content-Type: application/json
	I0603 12:47:00.704181    6624 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"106e8306-94db-4d36-a289-fdeede501dc4","resourceVersion":"437","creationTimestamp":"2024-06-03T12:44:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-03T12:44:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0603 12:47:00.711771    6624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0603 12:47:00.714037    6624 addons.go:510] duration metric: took 9.7048137s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0603 12:47:00.714037    6624 start.go:245] waiting for cluster config update ...
	I0603 12:47:00.714037    6624 start.go:254] writing updated cluster config ...
	I0603 12:47:00.726173    6624 ssh_runner.go:195] Run: rm -f paused
	I0603 12:47:00.868549    6624 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:47:00.872153    6624 out.go:177] * Done! kubectl is now configured to use "functional-808300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595679596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595829096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595871096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.596066296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615722419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615775719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615802019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615963419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619500423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619605123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619619223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619740523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:37 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:46:37Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.362279071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.364954075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365043476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365060876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365137676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363853574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363885474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363981074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401018432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401163732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401199732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401348832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1ff0e8444e017       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   c5bda73a13795       coredns-7db6d8ff4d-42cp7
	be000e19e002b       6e38f40d628db       2 minutes ago       Running             storage-provisioner       2                   8a2a7c2d993df       storage-provisioner
	f452cbb268759       747097150317f       2 minutes ago       Running             kube-proxy                2                   dc04e82865964       kube-proxy-66ngx
	75f43b1538ea8       a52dc94f0a912       2 minutes ago       Running             kube-scheduler            2                   e13d219adabb0       kube-scheduler-functional-808300
	1f3d2239938b2       91be940803172       2 minutes ago       Running             kube-apiserver            2                   0d1392b7a5869       kube-apiserver-functional-808300
	83b5eb4ecd28f       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   21d1a639c77e5       etcd-functional-808300
	dcdcc621dd5c6       25a1387cdab82       2 minutes ago       Running             kube-controller-manager   2                   2c63105d6657d       kube-controller-manager-functional-808300
	83c4519534936       3861cfcd7c04c       2 minutes ago       Created             etcd                      1                   eb74516b16cf4       etcd-functional-808300
	eade14c1c5b68       6e38f40d628db       2 minutes ago       Created             storage-provisioner       1                   86b73cfdf66cf       storage-provisioner
	2fe782b706294       747097150317f       2 minutes ago       Created             kube-proxy                1                   75af9fb73dddf       kube-proxy-66ngx
	577e1c60911fa       91be940803172       2 minutes ago       Created             kube-apiserver            1                   69c1d2f0cb64c       kube-apiserver-functional-808300
	65d6796adbfbe       25a1387cdab82       2 minutes ago       Created             kube-controller-manager   1                   5d6e5cc420d96       kube-controller-manager-functional-808300
	02843dfe5169f       a52dc94f0a912       2 minutes ago       Exited              kube-scheduler            1                   ce20c4c25d181       kube-scheduler-functional-808300
	c4fb3a7c664e6       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   04d2064bec327       coredns-7db6d8ff4d-42cp7
	
	
	==> coredns [1ff0e8444e01] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52043 - 49732 "HINFO IN 3756941186989265594.5795568095872067501. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02519134s
	
	
	==> coredns [c4fb3a7c664e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37496 - 49769 "HINFO IN 7237563384337257517.68881939644737712. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.024902538s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-808300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-808300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=functional-808300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_44_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:44:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-808300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:48:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:48:39 +0000   Mon, 03 Jun 2024 12:44:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:48:39 +0000   Mon, 03 Jun 2024 12:44:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:48:39 +0000   Mon, 03 Jun 2024 12:44:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:48:39 +0000   Mon, 03 Jun 2024 12:44:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.22.146.164
	  Hostname:    functional-808300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 45db94a223f344f8a4223037c13663a5
	  System UUID:                f8cbfd1c-c122-8c47-90f8-20d17c162b47
	  Boot ID:                    3d5a6fb7-bd7f-4a97-a435-a7c6917c91b8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-42cp7                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m4s
	  kube-system                 etcd-functional-808300                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-functional-808300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-functional-808300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-66ngx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-functional-808300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  Starting                 2m7s                   kube-proxy       
	  Normal  Starting                 4m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node functional-808300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node functional-808300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m26s)  kubelet          Node functional-808300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node functional-808300 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node functional-808300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node functional-808300 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  NodeReady                4m16s                  kubelet          Node functional-808300 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node functional-808300 event: Registered Node functional-808300 in Controller
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m14s)  kubelet          Node functional-808300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m14s)  kubelet          Node functional-808300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m14s)  kubelet          Node functional-808300 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           117s                   node-controller  Node functional-808300 event: Registered Node functional-808300 in Controller
	
	
	==> dmesg <==
	[  +5.197223] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.672335] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +5.521238] systemd-fstab-generator[1708]: Ignoring "noauto" option for root device
	[  +0.107978] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.031421] systemd-fstab-generator[2120]: Ignoring "noauto" option for root device
	[  +0.119187] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.935296] systemd-fstab-generator[2356]: Ignoring "noauto" option for root device
	[  +0.285231] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.968672] kauditd_printk_skb: 71 callbacks suppressed
	[Jun 3 12:46] systemd-fstab-generator[3432]: Ignoring "noauto" option for root device
	[  +0.669802] systemd-fstab-generator[3482]: Ignoring "noauto" option for root device
	[  +0.254078] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.299244] systemd-fstab-generator[3508]: Ignoring "noauto" option for root device
	[  +5.308659] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.948638] systemd-fstab-generator[4092]: Ignoring "noauto" option for root device
	[  +0.218396] systemd-fstab-generator[4104]: Ignoring "noauto" option for root device
	[  +0.206903] systemd-fstab-generator[4116]: Ignoring "noauto" option for root device
	[  +0.257355] systemd-fstab-generator[4131]: Ignoring "noauto" option for root device
	[  +0.830261] systemd-fstab-generator[4289]: Ignoring "noauto" option for root device
	[  +0.959896] kauditd_printk_skb: 142 callbacks suppressed
	[  +5.613475] systemd-fstab-generator[5386]: Ignoring "noauto" option for root device
	[  +0.142828] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.855368] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.262421] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.413051] systemd-fstab-generator[5910]: Ignoring "noauto" option for root device
	
	
	==> etcd [83b5eb4ecd28] <==
	{"level":"info","ts":"2024-06-03T12:46:34.195661Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T12:46:34.195677Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T12:46:34.196111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba7173a680ef3652 switched to configuration voters=(13434646322387826258)"}
	{"level":"info","ts":"2024-06-03T12:46:34.196196Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"93df14db82a4c103","local-member-id":"ba7173a680ef3652","added-peer-id":"ba7173a680ef3652","added-peer-peer-urls":["https://172.22.146.164:2380"]}
	{"level":"info","ts":"2024-06-03T12:46:34.196292Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"93df14db82a4c103","local-member-id":"ba7173a680ef3652","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:46:34.208425Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T12:46:34.202805Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T12:46:34.21485Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ba7173a680ef3652","initial-advertise-peer-urls":["https://172.22.146.164:2380"],"listen-peer-urls":["https://172.22.146.164:2380"],"advertise-client-urls":["https://172.22.146.164:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.146.164:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T12:46:34.214899Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T12:46:34.202848Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.22.146.164:2380"}
	{"level":"info","ts":"2024-06-03T12:46:34.214955Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.22.146.164:2380"}
	{"level":"info","ts":"2024-06-03T12:46:35.718439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba7173a680ef3652 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T12:46:35.71854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba7173a680ef3652 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T12:46:35.718599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba7173a680ef3652 received MsgPreVoteResp from ba7173a680ef3652 at term 2"}
	{"level":"info","ts":"2024-06-03T12:46:35.718614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba7173a680ef3652 became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T12:46:35.718647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba7173a680ef3652 received MsgVoteResp from ba7173a680ef3652 at term 3"}
	{"level":"info","ts":"2024-06-03T12:46:35.718665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba7173a680ef3652 became leader at term 3"}
	{"level":"info","ts":"2024-06-03T12:46:35.718693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ba7173a680ef3652 elected leader ba7173a680ef3652 at term 3"}
	{"level":"info","ts":"2024-06-03T12:46:35.728037Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ba7173a680ef3652","local-member-attributes":"{Name:functional-808300 ClientURLs:[https://172.22.146.164:2379]}","request-path":"/0/members/ba7173a680ef3652/attributes","cluster-id":"93df14db82a4c103","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T12:46:35.728043Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:46:35.728685Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T12:46:35.728726Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T12:46:35.7282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T12:46:35.731284Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T12:46:35.731345Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.22.146.164:2379"}
	
	
	==> etcd [83c451953493] <==
	
	
	==> kernel <==
	 12:48:46 up 6 min,  0 users,  load average: 0.16, 0.33, 0.17
	Linux functional-808300 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1f3d2239938b] <==
	I0603 12:46:37.389480       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 12:46:37.389801       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 12:46:37.403833       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 12:46:37.407319       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 12:46:37.407593       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 12:46:37.407635       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 12:46:37.408269       1 aggregator.go:165] initial CRD sync complete...
	I0603 12:46:37.408426       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 12:46:37.408536       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 12:46:37.408566       1 cache.go:39] Caches are synced for autoregister controller
	I0603 12:46:37.412666       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 12:46:37.413548       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0603 12:46:37.422491       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0603 12:46:37.422890       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 12:46:37.425443       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 12:46:37.425617       1 policy_source.go:224] refreshing policies
	I0603 12:46:37.470551       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 12:46:38.193595       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 12:46:39.145347       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 12:46:39.188317       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 12:46:39.301232       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 12:46:39.362289       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 12:46:39.372757       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 12:46:49.659113       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 12:46:49.692110       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [577e1c60911f] <==
	
	
	==> kube-controller-manager [65d6796adbfb] <==
	
	
	==> kube-controller-manager [dcdcc621dd5c] <==
	I0603 12:46:49.674913       1 shared_informer.go:320] Caches are synced for GC
	I0603 12:46:49.679173       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 12:46:49.692504       1 shared_informer.go:320] Caches are synced for service account
	I0603 12:46:49.697187       1 shared_informer.go:320] Caches are synced for node
	I0603 12:46:49.697254       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 12:46:49.697294       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 12:46:49.697315       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 12:46:49.697322       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 12:46:49.698639       1 shared_informer.go:320] Caches are synced for expand
	I0603 12:46:49.706519       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 12:46:49.710138       1 shared_informer.go:320] Caches are synced for job
	I0603 12:46:49.712573       1 shared_informer.go:320] Caches are synced for deployment
	I0603 12:46:49.747786       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 12:46:49.766332       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 12:46:49.776103       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 12:46:49.799785       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 12:46:49.851878       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 12:46:49.858554       1 shared_informer.go:320] Caches are synced for taint
	I0603 12:46:49.859042       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 12:46:49.859201       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-808300"
	I0603 12:46:49.859451       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0603 12:46:49.892434       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 12:46:50.329148       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 12:46:50.334568       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 12:46:50.334889       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [2fe782b70629] <==
	
	
	==> kube-proxy [f452cbb26875] <==
	I0603 12:46:38.624063       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:46:38.660908       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.146.164"]
	I0603 12:46:38.709577       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:46:38.709625       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:46:38.709644       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:46:38.712961       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:46:38.713593       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:46:38.713971       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:46:38.715335       1 config.go:192] "Starting service config controller"
	I0603 12:46:38.715840       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:46:38.715999       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:46:38.716105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:46:38.716987       1 config.go:319] "Starting node config controller"
	I0603 12:46:38.717027       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:46:38.817036       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:46:38.817059       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:46:38.817435       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [02843dfe5169] <==
	I0603 12:46:29.258263       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [75f43b1538ea] <==
	I0603 12:46:37.331757       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 12:46:37.340647       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W0603 12:46:37.358704       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:46:37.358892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 12:46:37.359427       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:46:37.362745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 12:46:37.362489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 12:46:37.362809       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 12:46:37.362552       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:46:37.362830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 12:46:37.362638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 12:46:37.362851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 12:46:37.362683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:46:37.362883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:46:37.362732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:46:37.362897       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 12:46:37.363781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:46:37.363824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 12:46:37.363927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:46:37.363961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 12:46:37.363974       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 12:46:37.363982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 12:46:37.364919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:46:37.364962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 12:46:37.440959       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 12:46:37 functional-808300 kubelet[5393]: I0603 12:46:37.451807    5393 kubelet_node_status.go:76] "Successfully registered node" node="functional-808300"
	Jun 03 12:46:37 functional-808300 kubelet[5393]: I0603 12:46:37.453644    5393 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 03 12:46:37 functional-808300 kubelet[5393]: I0603 12:46:37.455187    5393 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 03 12:46:37 functional-808300 kubelet[5393]: I0603 12:46:37.756757    5393 apiserver.go:52] "Watching apiserver"
	Jun 03 12:46:37 functional-808300 kubelet[5393]: I0603 12:46:37.761137    5393 topology_manager.go:215] "Topology Admit Handler" podUID="9d2a4b61-760c-48da-96bf-18224b420ecc" podNamespace="kube-system" podName="kube-proxy-66ngx"
	Jun 03 12:46:37 functional-808300 kubelet[5393]: I0603 12:46:37.761607    5393 topology_manager.go:215] "Topology Admit Handler" podUID="2127dc1b-897b-4fd8-9d36-4f67c5018a98" podNamespace="kube-system" podName="coredns-7db6d8ff4d-42cp7"
	Jun 03 12:46:37 functional-808300 kubelet[5393]: I0603 12:46:37.761833    5393 topology_manager.go:215] "Topology Admit Handler" podUID="770d8091-cdaf-4c5d-83e4-b93c973a520e" podNamespace="kube-system" podName="storage-provisioner"
	Jun 03 12:46:37 functional-808300 kubelet[5393]: I0603 12:46:37.766845    5393 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 03 12:46:37 functional-808300 kubelet[5393]: I0603 12:46:37.782515    5393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/770d8091-cdaf-4c5d-83e4-b93c973a520e-tmp\") pod \"storage-provisioner\" (UID: \"770d8091-cdaf-4c5d-83e4-b93c973a520e\") " pod="kube-system/storage-provisioner"
	Jun 03 12:46:37 functional-808300 kubelet[5393]: I0603 12:46:37.782715    5393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d2a4b61-760c-48da-96bf-18224b420ecc-xtables-lock\") pod \"kube-proxy-66ngx\" (UID: \"9d2a4b61-760c-48da-96bf-18224b420ecc\") " pod="kube-system/kube-proxy-66ngx"
	Jun 03 12:46:37 functional-808300 kubelet[5393]: I0603 12:46:37.782816    5393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d2a4b61-760c-48da-96bf-18224b420ecc-lib-modules\") pod \"kube-proxy-66ngx\" (UID: \"9d2a4b61-760c-48da-96bf-18224b420ecc\") " pod="kube-system/kube-proxy-66ngx"
	Jun 03 12:46:38 functional-808300 kubelet[5393]: I0603 12:46:38.063014    5393 scope.go:117] "RemoveContainer" containerID="eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d"
	Jun 03 12:46:38 functional-808300 kubelet[5393]: I0603 12:46:38.063426    5393 scope.go:117] "RemoveContainer" containerID="2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428"
	Jun 03 12:46:38 functional-808300 kubelet[5393]: I0603 12:46:38.064124    5393 scope.go:117] "RemoveContainer" containerID="c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b"
	Jun 03 12:46:42 functional-808300 kubelet[5393]: I0603 12:46:42.685028    5393 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 03 12:47:32 functional-808300 kubelet[5393]: E0603 12:47:32.930137    5393 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:47:32 functional-808300 kubelet[5393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:47:32 functional-808300 kubelet[5393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:47:32 functional-808300 kubelet[5393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:47:32 functional-808300 kubelet[5393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:48:32 functional-808300 kubelet[5393]: E0603 12:48:32.927205    5393 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:48:32 functional-808300 kubelet[5393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:48:32 functional-808300 kubelet[5393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:48:32 functional-808300 kubelet[5393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:48:32 functional-808300 kubelet[5393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [be000e19e002] <==
	I0603 12:46:38.503802       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 12:46:38.546317       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 12:46:38.546361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 12:46:55.982642       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 12:46:55.983199       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-808300_2359ba89-4290-40a4-97e5-d72570570364!
	I0603 12:46:55.984802       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"605aec1c-a59d-4b62-8b08-789bb374f7de", APIVersion:"v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-808300_2359ba89-4290-40a4-97e5-d72570570364 became leader
	I0603 12:46:56.084311       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-808300_2359ba89-4290-40a4-97e5-d72570570364!
	
	
	==> storage-provisioner [eade14c1c5b6] <==
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 12:48:38.493029    9676 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300: (11.9290594s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-808300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (33.81s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (282.89s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-808300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0603 12:50:14.723334   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-808300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (2m29.7949216s)

                                                
                                                
-- stdout --
	* [functional-808300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-808300" primary control-plane node in "functional-808300" cluster
	* Updating the running hyperv "functional-808300" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 12:49:00.155544    1732 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 03 12:43:24 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.628866122Z" level=info msg="Starting up"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.630311181Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.634028433Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.661523756Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685876251Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685936153Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686065059Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686231965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686317369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686429774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686588180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686671783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686689684Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686701185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686787688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.687222106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689704107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689791211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689905315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690003819Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690236329Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690393535Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690500340Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716000481Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716245191Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716277293Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716304794Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716324495Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716446300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716794814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716969021Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717114327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717181530Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717203130Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717218631Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717231232Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717245932Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717260533Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717272933Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717285134Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717297434Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717327536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717348336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717362137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717375337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717387738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717400138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717412139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717424939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717439040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717453441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717465841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717477642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717489642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717504543Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717524444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717538544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717550045Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717602747Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717628148Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717640148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717652149Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717663249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717675450Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717686050Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717990963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718194271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718615288Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718715492Z" level=info msg="containerd successfully booted in 0.058473s"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.702473456Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.735688127Z" level=info msg="Loading containers: start."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.010503637Z" level=info msg="Loading containers: done."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031232026Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031421030Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.159563851Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:26 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.161009285Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:43:56 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.687463640Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.689959945Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690215845Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690324445Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690369545Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:43:57 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:43:57 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:43:57 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.780438278Z" level=info msg="Starting up"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.781801780Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.787716190Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1033
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.819821447Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846310594Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846401094Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846519995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846539495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846563695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846575995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846813395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846924995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846964595Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846992395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847016696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847167896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.849934901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850031601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850168801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850259101Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850291801Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850310501Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850321201Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850561202Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850705702Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850744702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850771602Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850787202Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850831302Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851085603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851156303Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851172503Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851184203Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851196303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851208703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851219903Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851231903Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851245403Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851257303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851269103Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851295403Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851313103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851325103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851341303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851354003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851367703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851379503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851390703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851401803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851413403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851426003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851437203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851447803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851458203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851471403Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851491803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851503303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851513904Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851549004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851658104Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851678204Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851698604Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851709004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851720604Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851734804Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852115105Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852376705Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852445905Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852489705Z" level=info msg="containerd successfully booted in 0.033698s"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.828570435Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.851038275Z" level=info msg="Loading containers: start."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.026943787Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.118964350Z" level=info msg="Loading containers: done."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141485490Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141680390Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.197188889Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:59 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.198903592Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.853372506Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.854600708Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855309009Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855465609Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855498609Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:44:08 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:44:09 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:44:09 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.931457417Z" level=info msg="Starting up"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.932516719Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.934127421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1334
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.966766979Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992224024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992259224Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992358425Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992394325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992420125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992436425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992562225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992696325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992729425Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992741025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992765125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992867525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996464532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996565532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996738732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996823633Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996855433Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996872533Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996882433Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997062833Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997113833Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997130833Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997144433Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997157233Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997203633Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997453534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997578234Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997614934Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997663134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997678134Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997689934Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997700634Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997715034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997729234Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997740634Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997752034Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997762234Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997779734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997792334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997804134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997815434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997826234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997837534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997847934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997884934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997921334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997937534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997948435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997958635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997969935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997987135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998006735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998018335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998028535Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998087335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998102835Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998113035Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998125435Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998134935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998146935Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998156235Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998467335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998587736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998680736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998717236Z" level=info msg="containerd successfully booted in 0.033704s"
	Jun 03 12:44:10 functional-808300 dockerd[1328]: time="2024-06-03T12:44:10.979375074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:44:13 functional-808300 dockerd[1328]: time="2024-06-03T12:44:13.979794393Z" level=info msg="Loading containers: start."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.166761224Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.246745866Z" level=info msg="Loading containers: done."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275542917Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275794717Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318299593Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:44:14 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318416693Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481193033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481300231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.482452008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.483163794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555242697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555441293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555463693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.556420474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641567724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641688622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641972616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.642377908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696408761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696920551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697026749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697598738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.923771454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.925833014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926097609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926698097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975113159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975335655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975440053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.976007342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079922031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079992130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080044229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080177726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127553471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127864765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.128102061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.134911038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534039591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534739189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534993488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.535448286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.999922775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001555370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001675769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001896169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.574212998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575391194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575730993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.576013792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119735326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119816834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119850737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.120575802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591893357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591995665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592015367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592819829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.866872994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867043707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867059308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867176618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:11 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.320707911Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.530075506Z" level=info msg="ignoring event" container=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530863111Z" level=info msg="shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530934512Z" level=warning msg="cleaning up after shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530947812Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548201118Z" level=info msg="shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548262819Z" level=warning msg="cleaning up after shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548275819Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.548926923Z" level=info msg="ignoring event" container=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.555005761Z" level=info msg="ignoring event" container=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555226762Z" level=info msg="shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555637564Z" level=warning msg="cleaning up after shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555871866Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571443362Z" level=info msg="shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571642763Z" level=info msg="ignoring event" container=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571688564Z" level=info msg="ignoring event" container=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571715264Z" level=info msg="ignoring event" container=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571729764Z" level=info msg="ignoring event" container=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583600637Z" level=warning msg="cleaning up after shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583651738Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571922365Z" level=info msg="shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602203453Z" level=warning msg="cleaning up after shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602215153Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.605428672Z" level=info msg="shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605570873Z" level=info msg="ignoring event" container=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605648174Z" level=info msg="ignoring event" container=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605689174Z" level=info msg="ignoring event" container=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605708174Z" level=info msg="ignoring event" container=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616825743Z" level=info msg="shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619069757Z" level=warning msg="cleaning up after shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619081657Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571968865Z" level=info msg="shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.622950981Z" level=warning msg="cleaning up after shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.623019281Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616768943Z" level=info msg="shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649220943Z" level=warning msg="cleaning up after shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649232743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649593346Z" level=warning msg="cleaning up after shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649632646Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616798243Z" level=info msg="shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660353412Z" level=warning msg="cleaning up after shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660613314Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571948565Z" level=info msg="shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661857022Z" level=warning msg="cleaning up after shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661869022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.701730868Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.789945914Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.800700381Z" level=info msg="ignoring event" container=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802193190Z" level=info msg="shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802687893Z" level=warning msg="cleaning up after shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802957394Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.865834983Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1328]: time="2024-06-03T12:46:16.426781600Z" level=info msg="ignoring event" container=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429021313Z" level=info msg="shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429197714Z" level=warning msg="cleaning up after shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429215515Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.461057012Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.432071476Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.471179469Z" level=info msg="ignoring event" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471301366Z" level=info msg="shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471394963Z" level=warning msg="cleaning up after shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471408762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.533991230Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534869803Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534996499Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.535310690Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:46:22 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Consumed 4.876s CPU time.
	Jun 03 12:46:22 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.610929688Z" level=info msg="Starting up"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.611865461Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.613136725Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=3917
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.646536071Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670247194Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670360391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670450088Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670483087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670506787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670539786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670840677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670938074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670960374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670972073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670998073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.671139469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674461374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674583370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675060557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675230152Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675269851Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675297750Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675312250Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675642440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675701438Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675746437Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675788936Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675843034Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675898433Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677513487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677902676Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677984973Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678005973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678019272Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678033372Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678045471Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678074771Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678087670Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678099470Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678111970Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678122369Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678141069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678165268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678179068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678190967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678201767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678212967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678223666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678234666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678245966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678259765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678270865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678281565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678298864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678314564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678506758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678611555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678628755Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678700553Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679040743Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679084142Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679118541Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679144240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679155740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679165739Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679517929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679766922Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679827521Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679865720Z" level=info msg="containerd successfully booted in 0.035745s"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.663212880Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.695980015Z" level=info msg="Loading containers: start."
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.961510211Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.046062971Z" level=info msg="Loading containers: done."
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.075922544Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.076129939Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124525761Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124901652Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:46:24 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.231994444Z" level=error msg="Handler for GET /v1.44/containers/68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" spanID=326af23131ec94a7 traceID=8803c53e169299942225f4075fc21de5
	Jun 03 12:46:24 functional-808300 dockerd[3911]: 2024/06/03 12:46:24 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772084063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772274159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772357358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.775252298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945246488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945323086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945406685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.950967170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029005105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029349598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029863988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.030264081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039564104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039688602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039761901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039928798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226303462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226586457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226751953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.227086747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347252567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347436764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347474363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347654660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.441905572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442046969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442209966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442589559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.635985990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636416182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636608978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.637648558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.848060467Z" level=info msg="ignoring event" container=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851167708Z" level=info msg="shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851742597Z" level=warning msg="cleaning up after shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851821695Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.861031421Z" level=info msg="ignoring event" container=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.864043064Z" level=info msg="shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.865018845Z" level=info msg="ignoring event" container=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866029226Z" level=warning msg="cleaning up after shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866146324Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.865866429Z" level=info msg="shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866559616Z" level=warning msg="cleaning up after shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866626315Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.874086573Z" level=info msg="ignoring event" container=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.875139053Z" level=info msg="ignoring event" container=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879726666Z" level=info msg="shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.883291398Z" level=warning msg="cleaning up after shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879810365Z" level=info msg="shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886134245Z" level=warning msg="cleaning up after shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886413939Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.884961767Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.005534788Z" level=info msg="ignoring event" container=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007078361Z" level=info msg="shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007356756Z" level=warning msg="cleaning up after shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007522453Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.117025348Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.487894595Z" level=info msg="ignoring event" container=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.489713764Z" level=info msg="shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490041558Z" level=warning msg="cleaning up after shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490061758Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.915977147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916565637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916679435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916848732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.031752879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032666665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032798863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.033668649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3911]: time="2024-06-03T12:46:29.861712863Z" level=info msg="ignoring event" container=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863639332Z" level=info msg="shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863797430Z" level=warning msg="cleaning up after shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863862329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194045838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194125737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194139737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194288235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.324621840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326281415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326470813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326978105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424497687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424951381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447077459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447586651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531075037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531171736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531184436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531290034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542348873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542475071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542490771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542581970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554547048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554615849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554645449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554819849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595679596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595829096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595871096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.596066296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615722419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615775719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615802019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615963419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619500423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619605123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619619223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619740523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.362279071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.364954075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365043476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365060876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365137676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363853574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363885474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363981074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401018432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401163732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401199732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401348832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:50:18 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.355659920Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.500564779Z" level=info msg="ignoring event" container=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.502392091Z" level=info msg="shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505257410Z" level=warning msg="cleaning up after shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505505012Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.559469469Z" level=info msg="ignoring event" container=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562029186Z" level=info msg="shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562079586Z" level=warning msg="cleaning up after shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562089586Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.565925812Z" level=info msg="ignoring event" container=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566150213Z" level=info msg="shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566239014Z" level=warning msg="cleaning up after shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566294014Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.568666030Z" level=info msg="ignoring event" container=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568889531Z" level=info msg="shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568944532Z" level=warning msg="cleaning up after shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568956532Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.591020678Z" level=info msg="ignoring event" container=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591289280Z" level=info msg="shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591381680Z" level=warning msg="cleaning up after shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591394180Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.601843549Z" level=info msg="shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602416253Z" level=info msg="ignoring event" container=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602469454Z" level=info msg="ignoring event" container=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602501354Z" level=info msg="ignoring event" container=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602446653Z" level=warning msg="cleaning up after shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602625555Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608358493Z" level=info msg="shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608420693Z" level=warning msg="cleaning up after shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608435393Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622700688Z" level=info msg="shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622837388Z" level=warning msg="cleaning up after shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622919789Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651705580Z" level=info msg="shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651827580Z" level=warning msg="cleaning up after shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651840680Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653814394Z" level=info msg="ignoring event" container=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653869794Z" level=info msg="ignoring event" container=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656537812Z" level=info msg="shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656607912Z" level=warning msg="cleaning up after shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656638212Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689247628Z" level=info msg="shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689349429Z" level=warning msg="cleaning up after shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689362229Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.689544230Z" level=info msg="ignoring event" container=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.776260304Z" level=info msg="ignoring event" container=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.781705240Z" level=info msg="shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782034342Z" level=warning msg="cleaning up after shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782163743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.471467983Z" level=info msg="shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472291989Z" level=warning msg="cleaning up after shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472355489Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3911]: time="2024-06-03T12:50:23.473084794Z" level=info msg="ignoring event" container=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.462170568Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.522259595Z" level=info msg="ignoring event" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524322178Z" level=info msg="shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524549387Z" level=warning msg="cleaning up after shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524566388Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.585453246Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586244178Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586390484Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586415685Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:50:29 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Consumed 9.808s CPU time.
	Jun 03 12:50:29 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:50:29 functional-808300 dockerd[7943]: time="2024-06-03T12:50:29.663260817Z" level=info msg="Starting up"
	Jun 03 12:51:29 functional-808300 dockerd[7943]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 12:51:29 functional-808300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-808300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:757: restart took 2m30.0304164s for "functional-808300" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300
E0603 12:51:37.917155   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300: exit status 2 (11.9282619s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0603 12:51:30.196566    3388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs -n 25: (1m48.5250056s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-397300 --log_dir                                                  | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:39 UTC | 03 Jun 24 12:39 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-397300 --log_dir                                                  | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:39 UTC | 03 Jun 24 12:40 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-397300 --log_dir                                                  | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:40 UTC | 03 Jun 24 12:40 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-397300 --log_dir                                                  | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:40 UTC | 03 Jun 24 12:40 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-397300 --log_dir                                                  | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:40 UTC | 03 Jun 24 12:40 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-397300 --log_dir                                                  | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:40 UTC | 03 Jun 24 12:41 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-397300                                                         | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:41 UTC | 03 Jun 24 12:41 UTC |
	| start   | -p functional-808300                                                     | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:41 UTC | 03 Jun 24 12:44 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-808300                                                     | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:44 UTC | 03 Jun 24 12:47 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | minikube-local-cache-test:functional-808300                              |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache delete                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | minikube-local-cache-test:functional-808300                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	| ssh     | functional-808300 ssh sudo                                               | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-808300                                                        | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh                                                    | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache reload                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	| ssh     | functional-808300 ssh                                                    | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-808300 kubectl --                                             | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | --context functional-808300                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-808300                                                     | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:49 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:49:00
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:49:00.235842    1732 out.go:291] Setting OutFile to fd 840 ...
	I0603 12:49:00.236577    1732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:49:00.236577    1732 out.go:304] Setting ErrFile to fd 616...
	I0603 12:49:00.236577    1732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:49:00.261282    1732 out.go:298] Setting JSON to false
	I0603 12:49:00.264282    1732 start.go:129] hostinfo: {"hostname":"minikube3","uptime":19868,"bootTime":1717399071,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 12:49:00.264282    1732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 12:49:00.270409    1732 out.go:177] * [functional-808300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 12:49:00.274641    1732 notify.go:220] Checking for updates...
	I0603 12:49:00.276699    1732 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:49:00.278693    1732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:49:00.281652    1732 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 12:49:00.284648    1732 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:49:00.286651    1732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:49:00.291036    1732 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:49:00.291858    1732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:49:05.570980    1732 out.go:177] * Using the hyperv driver based on existing profile
	I0603 12:49:05.575724    1732 start.go:297] selected driver: hyperv
	I0603 12:49:05.575724    1732 start.go:901] validating driver "hyperv" against &{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:49:05.575724    1732 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:49:05.626806    1732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:49:05.626806    1732 cni.go:84] Creating CNI manager for ""
	I0603 12:49:05.626806    1732 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:49:05.626806    1732 start.go:340] cluster config:
	{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:49:05.626806    1732 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:49:05.633624    1732 out.go:177] * Starting "functional-808300" primary control-plane node in "functional-808300" cluster
	I0603 12:49:05.636635    1732 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 12:49:05.637158    1732 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 12:49:05.637158    1732 cache.go:56] Caching tarball of preloaded images
	I0603 12:49:05.637684    1732 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 12:49:05.637751    1732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 12:49:05.637751    1732 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\config.json ...
	I0603 12:49:05.640967    1732 start.go:360] acquireMachinesLock for functional-808300: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:49:05.640967    1732 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-808300"
	I0603 12:49:05.640967    1732 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:49:05.640967    1732 fix.go:54] fixHost starting: 
	I0603 12:49:05.641715    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:08.415782    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:08.415782    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:08.415782    1732 fix.go:112] recreateIfNeeded on functional-808300: state=Running err=<nil>
	W0603 12:49:08.416795    1732 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:49:08.420899    1732 out.go:177] * Updating the running hyperv "functional-808300" VM ...
	I0603 12:49:08.423508    1732 machine.go:94] provisionDockerMachine start ...
	I0603 12:49:08.423582    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:13.253487    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:13.254503    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:13.260432    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:13.261482    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:13.261482    1732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:49:13.399057    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:49:13.399210    1732 buildroot.go:166] provisioning hostname "functional-808300"
	I0603 12:49:13.399210    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:15.541436    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:15.541675    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:15.541675    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:18.074512    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:18.074512    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:18.080673    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:18.081341    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:18.081341    1732 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-808300 && echo "functional-808300" | sudo tee /etc/hostname
	I0603 12:49:18.249098    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:49:18.249098    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:20.352120    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:20.352282    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:20.352356    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:22.898474    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:22.898474    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:22.905033    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:22.905583    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:22.905583    1732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-808300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-808300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-808300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:49:23.038156    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:49:23.038156    1732 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 12:49:23.038286    1732 buildroot.go:174] setting up certificates
	I0603 12:49:23.038286    1732 provision.go:84] configureAuth start
	I0603 12:49:23.038368    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:27.735183    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:27.735183    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:27.736187    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:32.410109    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:32.410109    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:32.410109    1732 provision.go:143] copyHostCerts
	I0603 12:49:32.410879    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 12:49:32.410879    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 12:49:32.411331    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 12:49:32.412635    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 12:49:32.412635    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 12:49:32.412996    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 12:49:32.414198    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 12:49:32.414198    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 12:49:32.414545    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 12:49:32.415610    1732 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-808300 san=[127.0.0.1 172.22.146.164 functional-808300 localhost minikube]
	I0603 12:49:32.712767    1732 provision.go:177] copyRemoteCerts
	I0603 12:49:32.724764    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:49:32.724764    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:34.837128    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:34.837128    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:34.837856    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:37.375330    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:37.375330    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:37.375559    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:49:37.480771    1732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7559241s)
	I0603 12:49:37.480826    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:49:37.528205    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:49:37.578459    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:49:37.627279    1732 provision.go:87] duration metric: took 14.5888698s to configureAuth
	I0603 12:49:37.627279    1732 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:49:37.628273    1732 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:49:37.628273    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:39.750715    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:39.750715    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:39.750894    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:42.248163    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:42.248163    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:42.253817    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:42.254350    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:42.254350    1732 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 12:49:42.390315    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 12:49:42.390315    1732 buildroot.go:70] root file system type: tmpfs
	I0603 12:49:42.390486    1732 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 12:49:42.390577    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:47.015306    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:47.015306    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:47.020999    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:47.020999    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:47.021566    1732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 12:49:47.189720    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 12:49:47.189902    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:51.842444    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:51.842685    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:51.847410    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:51.848026    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:51.848136    1732 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 12:49:52.002270    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:49:52.002270    1732 machine.go:97] duration metric: took 43.5783954s to provisionDockerMachine
	I0603 12:49:52.002270    1732 start.go:293] postStartSetup for "functional-808300" (driver="hyperv")
	I0603 12:49:52.002270    1732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:49:52.014902    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:49:52.014902    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:54.129644    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:54.129780    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:54.129780    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:56.657058    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:56.657058    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:56.657058    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:49:56.769087    1732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.754029s)
	I0603 12:49:56.782600    1732 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:49:56.789695    1732 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:49:56.789695    1732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 12:49:56.790223    1732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 12:49:56.790944    1732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 12:49:56.791808    1732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts -> hosts in /etc/test/nested/copy/10544
	I0603 12:49:56.804680    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/10544
	I0603 12:49:56.825546    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 12:49:56.870114    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts --> /etc/test/nested/copy/10544/hosts (40 bytes)
	I0603 12:49:56.918755    1732 start.go:296] duration metric: took 4.9164445s for postStartSetup
	I0603 12:49:56.918830    1732 fix.go:56] duration metric: took 51.2774317s for fixHost
	I0603 12:49:56.918830    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:01.610237    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:01.610237    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:01.616356    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:01.616925    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:50:01.616925    1732 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:50:01.754458    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717419001.765759569
	
	I0603 12:50:01.754458    1732 fix.go:216] guest clock: 1717419001.765759569
	I0603 12:50:01.754999    1732 fix.go:229] Guest: 2024-06-03 12:50:01.765759569 +0000 UTC Remote: 2024-06-03 12:49:56.9188301 +0000 UTC m=+56.849473901 (delta=4.846929469s)
	I0603 12:50:01.755117    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:06.434824    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:06.434824    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:06.441287    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:06.441474    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:50:06.441474    1732 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717419001
	I0603 12:50:06.585742    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:50:01 UTC 2024
	
	I0603 12:50:06.585742    1732 fix.go:236] clock set: Mon Jun  3 12:50:01 UTC 2024
	 (err=<nil>)
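The clock fix above follows a read-compare-set pattern: read the guest clock as a fractional epoch timestamp, compute the delta against the host (here 4.846929469s), and set the guest clock from the host's epoch seconds. The format verbs of the read command were mangled in the log (%!s(MISSING)), so its exact form is an assumption; a plausible sketch consistent with the logged output 1717419001.765759569:

  # Read the guest clock as seconds.nanoseconds since the epoch (assumed form of the mangled command).
  date +%s.%N
  # Set the guest clock from the host's epoch seconds, exactly as logged.
  sudo date -s @1717419001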
	I0603 12:50:06.585742    1732 start.go:83] releasing machines lock for "functional-808300", held for 1m0.9442633s
	I0603 12:50:06.586483    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:11.280358    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:11.280358    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:11.286940    1732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:50:11.287127    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:11.297353    1732 ssh_runner.go:195] Run: cat /version.json
	I0603 12:50:11.297353    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:13.526365    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:13.526365    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:13.526449    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:16.184971    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:16.184971    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:16.185280    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:50:16.202281    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:16.202281    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:16.203074    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:50:16.291651    1732 ssh_runner.go:235] Completed: cat /version.json: (4.9942561s)
	I0603 12:50:16.306274    1732 ssh_runner.go:195] Run: systemctl --version
	I0603 12:50:16.355391    1732 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0675511s)
	I0603 12:50:16.366636    1732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:50:16.375691    1732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:50:16.388090    1732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:50:16.405978    1732 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 12:50:16.405978    1732 start.go:494] detecting cgroup driver to use...
	I0603 12:50:16.405978    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:50:16.453816    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 12:50:16.485596    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 12:50:16.503969    1732 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 12:50:16.517971    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 12:50:16.549156    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:50:16.581312    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 12:50:16.612775    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:50:16.647414    1732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:50:16.678358    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 12:50:16.708418    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 12:50:16.743475    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
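Taken together, the sed edits above are meant to leave the CRI section of /etc/containerd/config.toml with cgroupfs as the cgroup driver, the runc v2 runtime, the pause:3.9 sandbox image, the /etc/cni/net.d conf dir, and unprivileged ports enabled. A quick illustrative spot-check (the grep itself is not part of the test run; the key names come from the commands above):

  grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
  # Expected after the edits:
  #   SystemdCgroup = false
  #   sandbox_image = "registry.k8s.io/pause:3.9"
  #   restrict_oom_score_adj = false
  #   conf_dir = "/etc/cni/net.d"
  #   enable_unprivileged_ports = true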
	I0603 12:50:16.776832    1732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:50:16.806324    1732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:50:16.840166    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:50:17.096238    1732 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 12:50:17.129261    1732 start.go:494] detecting cgroup driver to use...
	I0603 12:50:17.142588    1732 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 12:50:17.178015    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:50:17.214526    1732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:50:17.282409    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:50:17.322016    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 12:50:17.346060    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:50:17.394003    1732 ssh_runner.go:195] Run: which cri-dockerd
	I0603 12:50:17.411821    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 12:50:17.430017    1732 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 12:50:17.478608    1732 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 12:50:17.759911    1732 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 12:50:18.009777    1732 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 12:50:18.009777    1732 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
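Here docker itself is pointed at the cgroupfs cgroup driver via a small /etc/docker/daemon.json (130 bytes; contents not shown in the log). Once the daemon comes back up, the effective driver can be confirmed with docker's own CLI; this check is illustrative and not part of the test run:

  # Prints "cgroupfs" if the daemon.json written above took effect.
  docker info --format '{{.CgroupDriver}}'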
	I0603 12:50:18.055298    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:50:18.318935    1732 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 12:51:29.680979    1732 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3613501s)
	I0603 12:51:29.693407    1732 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0603 12:51:29.782469    1732 out.go:177] 
	W0603 12:51:29.786096    1732 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 03 12:43:24 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.628866122Z" level=info msg="Starting up"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.630311181Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.634028433Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.661523756Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685876251Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685936153Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686065059Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686231965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686317369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686429774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686588180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686671783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686689684Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686701185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686787688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.687222106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689704107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689791211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689905315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690003819Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690236329Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690393535Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690500340Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716000481Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716245191Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716277293Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716304794Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716324495Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716446300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716794814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716969021Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717114327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717181530Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717203130Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717218631Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717231232Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717245932Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717260533Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717272933Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717285134Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717297434Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717327536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717348336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717362137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717375337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717387738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717400138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717412139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717424939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717439040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717453441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717465841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717477642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717489642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717504543Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717524444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717538544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717550045Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717602747Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717628148Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717640148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717652149Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717663249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717675450Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717686050Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717990963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718194271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718615288Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718715492Z" level=info msg="containerd successfully booted in 0.058473s"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.702473456Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.735688127Z" level=info msg="Loading containers: start."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.010503637Z" level=info msg="Loading containers: done."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031232026Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031421030Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.159563851Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:26 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.161009285Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:43:56 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.687463640Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.689959945Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690215845Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690324445Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690369545Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:43:57 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:43:57 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:43:57 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.780438278Z" level=info msg="Starting up"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.781801780Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.787716190Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1033
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.819821447Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846310594Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846401094Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846519995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846539495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846563695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846575995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846813395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846924995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846964595Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846992395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847016696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847167896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.849934901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850031601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850168801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850259101Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850291801Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850310501Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850321201Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850561202Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850705702Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850744702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850771602Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850787202Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850831302Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851085603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851156303Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851172503Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851184203Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851196303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851208703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851219903Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851231903Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851245403Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851257303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851269103Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851295403Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851313103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851325103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851341303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851354003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851367703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851379503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851390703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851401803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851413403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851426003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851437203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851447803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851458203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851471403Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851491803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851503303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851513904Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851549004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851658104Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851678204Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851698604Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851709004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851720604Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851734804Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852115105Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852376705Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852445905Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852489705Z" level=info msg="containerd successfully booted in 0.033698s"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.828570435Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.851038275Z" level=info msg="Loading containers: start."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.026943787Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.118964350Z" level=info msg="Loading containers: done."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141485490Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141680390Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.197188889Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:59 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.198903592Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.853372506Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.854600708Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855309009Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855465609Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855498609Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:44:08 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:44:09 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:44:09 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.931457417Z" level=info msg="Starting up"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.932516719Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.934127421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1334
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.966766979Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992224024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992259224Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992358425Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992394325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992420125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992436425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992562225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992696325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992729425Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992741025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992765125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992867525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996464532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996565532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996738732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996823633Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996855433Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996872533Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996882433Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997062833Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997113833Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997130833Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997144433Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997157233Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997203633Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997453534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997578234Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997614934Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997663134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997678134Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997689934Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997700634Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997715034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997729234Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997740634Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997752034Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997762234Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997779734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997792334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997804134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997815434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997826234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997837534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997847934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997884934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997921334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997937534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997948435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997958635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997969935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997987135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998006735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998018335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998028535Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998087335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998102835Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998113035Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998125435Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998134935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998146935Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998156235Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998467335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998587736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998680736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998717236Z" level=info msg="containerd successfully booted in 0.033704s"
	Jun 03 12:44:10 functional-808300 dockerd[1328]: time="2024-06-03T12:44:10.979375074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:44:13 functional-808300 dockerd[1328]: time="2024-06-03T12:44:13.979794393Z" level=info msg="Loading containers: start."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.166761224Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.246745866Z" level=info msg="Loading containers: done."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275542917Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275794717Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318299593Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:44:14 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318416693Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481193033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481300231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.482452008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.483163794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555242697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555441293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555463693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.556420474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641567724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641688622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641972616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.642377908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696408761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696920551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697026749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697598738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.923771454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.925833014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926097609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926698097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975113159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975335655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975440053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.976007342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079922031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079992130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080044229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080177726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127553471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127864765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.128102061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.134911038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534039591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534739189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534993488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.535448286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.999922775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001555370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001675769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001896169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.574212998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575391194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575730993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.576013792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119735326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119816834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119850737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.120575802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591893357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591995665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592015367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592819829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.866872994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867043707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867059308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867176618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:11 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.320707911Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.530075506Z" level=info msg="ignoring event" container=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530863111Z" level=info msg="shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530934512Z" level=warning msg="cleaning up after shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530947812Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548201118Z" level=info msg="shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548262819Z" level=warning msg="cleaning up after shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548275819Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.548926923Z" level=info msg="ignoring event" container=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.555005761Z" level=info msg="ignoring event" container=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555226762Z" level=info msg="shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555637564Z" level=warning msg="cleaning up after shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555871866Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571443362Z" level=info msg="shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571642763Z" level=info msg="ignoring event" container=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571688564Z" level=info msg="ignoring event" container=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571715264Z" level=info msg="ignoring event" container=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571729764Z" level=info msg="ignoring event" container=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583600637Z" level=warning msg="cleaning up after shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583651738Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571922365Z" level=info msg="shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602203453Z" level=warning msg="cleaning up after shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602215153Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.605428672Z" level=info msg="shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605570873Z" level=info msg="ignoring event" container=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605648174Z" level=info msg="ignoring event" container=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605689174Z" level=info msg="ignoring event" container=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605708174Z" level=info msg="ignoring event" container=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616825743Z" level=info msg="shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619069757Z" level=warning msg="cleaning up after shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619081657Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571968865Z" level=info msg="shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.622950981Z" level=warning msg="cleaning up after shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.623019281Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616768943Z" level=info msg="shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649220943Z" level=warning msg="cleaning up after shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649232743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649593346Z" level=warning msg="cleaning up after shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649632646Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616798243Z" level=info msg="shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660353412Z" level=warning msg="cleaning up after shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660613314Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571948565Z" level=info msg="shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661857022Z" level=warning msg="cleaning up after shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661869022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.701730868Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.789945914Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.800700381Z" level=info msg="ignoring event" container=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802193190Z" level=info msg="shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802687893Z" level=warning msg="cleaning up after shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802957394Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.865834983Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1328]: time="2024-06-03T12:46:16.426781600Z" level=info msg="ignoring event" container=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429021313Z" level=info msg="shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429197714Z" level=warning msg="cleaning up after shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429215515Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.461057012Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.432071476Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.471179469Z" level=info msg="ignoring event" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471301366Z" level=info msg="shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471394963Z" level=warning msg="cleaning up after shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471408762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.533991230Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534869803Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534996499Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.535310690Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:46:22 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Consumed 4.876s CPU time.
	Jun 03 12:46:22 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.610929688Z" level=info msg="Starting up"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.611865461Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.613136725Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=3917
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.646536071Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670247194Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670360391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670450088Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670483087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670506787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670539786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670840677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670938074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670960374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670972073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670998073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.671139469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674461374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674583370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675060557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675230152Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675269851Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675297750Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675312250Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675642440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675701438Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675746437Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675788936Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675843034Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675898433Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677513487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677902676Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677984973Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678005973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678019272Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678033372Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678045471Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678074771Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678087670Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678099470Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678111970Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678122369Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678141069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678165268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678179068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678190967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678201767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678212967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678223666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678234666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678245966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678259765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678270865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678281565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678298864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678314564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678506758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678611555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678628755Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678700553Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679040743Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679084142Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679118541Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679144240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679155740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679165739Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679517929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679766922Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679827521Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679865720Z" level=info msg="containerd successfully booted in 0.035745s"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.663212880Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.695980015Z" level=info msg="Loading containers: start."
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.961510211Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.046062971Z" level=info msg="Loading containers: done."
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.075922544Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.076129939Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124525761Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124901652Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:46:24 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.231994444Z" level=error msg="Handler for GET /v1.44/containers/68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" spanID=326af23131ec94a7 traceID=8803c53e169299942225f4075fc21de5
	Jun 03 12:46:24 functional-808300 dockerd[3911]: 2024/06/03 12:46:24 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772084063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772274159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772357358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.775252298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945246488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945323086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945406685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.950967170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029005105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029349598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029863988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.030264081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039564104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039688602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039761901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039928798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226303462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226586457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226751953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.227086747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347252567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347436764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347474363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347654660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.441905572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442046969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442209966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442589559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.635985990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636416182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636608978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.637648558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.848060467Z" level=info msg="ignoring event" container=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851167708Z" level=info msg="shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851742597Z" level=warning msg="cleaning up after shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851821695Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.861031421Z" level=info msg="ignoring event" container=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.864043064Z" level=info msg="shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.865018845Z" level=info msg="ignoring event" container=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866029226Z" level=warning msg="cleaning up after shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866146324Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.865866429Z" level=info msg="shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866559616Z" level=warning msg="cleaning up after shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866626315Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.874086573Z" level=info msg="ignoring event" container=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.875139053Z" level=info msg="ignoring event" container=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879726666Z" level=info msg="shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.883291398Z" level=warning msg="cleaning up after shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879810365Z" level=info msg="shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886134245Z" level=warning msg="cleaning up after shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886413939Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.884961767Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.005534788Z" level=info msg="ignoring event" container=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007078361Z" level=info msg="shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007356756Z" level=warning msg="cleaning up after shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007522453Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.117025348Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.487894595Z" level=info msg="ignoring event" container=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.489713764Z" level=info msg="shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490041558Z" level=warning msg="cleaning up after shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490061758Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.915977147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916565637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916679435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916848732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.031752879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032666665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032798863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.033668649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3911]: time="2024-06-03T12:46:29.861712863Z" level=info msg="ignoring event" container=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863639332Z" level=info msg="shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863797430Z" level=warning msg="cleaning up after shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863862329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194045838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194125737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194139737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194288235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.324621840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326281415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326470813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326978105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424497687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424951381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447077459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447586651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531075037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531171736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531184436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531290034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542348873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542475071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542490771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542581970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554547048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554615849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554645449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554819849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595679596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595829096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595871096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.596066296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615722419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615775719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615802019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615963419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619500423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619605123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619619223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619740523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.362279071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.364954075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365043476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365060876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365137676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363853574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363885474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363981074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401018432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401163732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401199732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401348832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:50:18 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.355659920Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.500564779Z" level=info msg="ignoring event" container=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.502392091Z" level=info msg="shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505257410Z" level=warning msg="cleaning up after shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505505012Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.559469469Z" level=info msg="ignoring event" container=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562029186Z" level=info msg="shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562079586Z" level=warning msg="cleaning up after shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562089586Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.565925812Z" level=info msg="ignoring event" container=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566150213Z" level=info msg="shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566239014Z" level=warning msg="cleaning up after shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566294014Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.568666030Z" level=info msg="ignoring event" container=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568889531Z" level=info msg="shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568944532Z" level=warning msg="cleaning up after shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568956532Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.591020678Z" level=info msg="ignoring event" container=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591289280Z" level=info msg="shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591381680Z" level=warning msg="cleaning up after shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591394180Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.601843549Z" level=info msg="shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602416253Z" level=info msg="ignoring event" container=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602469454Z" level=info msg="ignoring event" container=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602501354Z" level=info msg="ignoring event" container=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602446653Z" level=warning msg="cleaning up after shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602625555Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608358493Z" level=info msg="shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608420693Z" level=warning msg="cleaning up after shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608435393Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622700688Z" level=info msg="shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622837388Z" level=warning msg="cleaning up after shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622919789Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651705580Z" level=info msg="shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651827580Z" level=warning msg="cleaning up after shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651840680Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653814394Z" level=info msg="ignoring event" container=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653869794Z" level=info msg="ignoring event" container=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656537812Z" level=info msg="shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656607912Z" level=warning msg="cleaning up after shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656638212Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689247628Z" level=info msg="shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689349429Z" level=warning msg="cleaning up after shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689362229Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.689544230Z" level=info msg="ignoring event" container=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.776260304Z" level=info msg="ignoring event" container=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.781705240Z" level=info msg="shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782034342Z" level=warning msg="cleaning up after shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782163743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.471467983Z" level=info msg="shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472291989Z" level=warning msg="cleaning up after shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472355489Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3911]: time="2024-06-03T12:50:23.473084794Z" level=info msg="ignoring event" container=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.462170568Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.522259595Z" level=info msg="ignoring event" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524322178Z" level=info msg="shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524549387Z" level=warning msg="cleaning up after shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524566388Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.585453246Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586244178Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586390484Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586415685Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:50:29 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Consumed 9.808s CPU time.
	Jun 03 12:50:29 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:50:29 functional-808300 dockerd[7943]: time="2024-06-03T12:50:29.663260817Z" level=info msg="Starting up"
	Jun 03 12:51:29 functional-808300 dockerd[7943]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 12:51:29 functional-808300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0603 12:51:29.786899    1732 out.go:239] * 
	W0603 12:51:29.788963    1732 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:51:29.789078    1732 out.go:177] 
	
	
	==> Docker <==
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b'"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="error getting RW layer size for container ID '75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID '75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca'"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="error getting RW layer size for container ID 'f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b'"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="error getting RW layer size for container ID '1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc'"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="error getting RW layer size for container ID 'eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="error getting RW layer size for container ID 'dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908'"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d'"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="error getting RW layer size for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f'"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="error getting RW layer size for container ID 'be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495'"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="error getting RW layer size for container ID '83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID '83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf'"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="error getting RW layer size for container ID '02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID '02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165'"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="error getting RW layer size for container ID '65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:52:29 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:52:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID '65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c'"
	Jun 03 12:52:30 functional-808300 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jun 03 12:52:30 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:52:30 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-03T12:52:32Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +13.935296] systemd-fstab-generator[2356]: Ignoring "noauto" option for root device
	[  +0.285231] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.968672] kauditd_printk_skb: 71 callbacks suppressed
	[Jun 3 12:46] systemd-fstab-generator[3432]: Ignoring "noauto" option for root device
	[  +0.669802] systemd-fstab-generator[3482]: Ignoring "noauto" option for root device
	[  +0.254078] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.299244] systemd-fstab-generator[3508]: Ignoring "noauto" option for root device
	[  +5.308659] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.948638] systemd-fstab-generator[4092]: Ignoring "noauto" option for root device
	[  +0.218396] systemd-fstab-generator[4104]: Ignoring "noauto" option for root device
	[  +0.206903] systemd-fstab-generator[4116]: Ignoring "noauto" option for root device
	[  +0.257355] systemd-fstab-generator[4131]: Ignoring "noauto" option for root device
	[  +0.830261] systemd-fstab-generator[4289]: Ignoring "noauto" option for root device
	[  +0.959896] kauditd_printk_skb: 142 callbacks suppressed
	[  +5.613475] systemd-fstab-generator[5386]: Ignoring "noauto" option for root device
	[  +0.142828] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.855368] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.262421] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.413051] systemd-fstab-generator[5910]: Ignoring "noauto" option for root device
	[Jun 3 12:50] systemd-fstab-generator[7480]: Ignoring "noauto" option for root device
	[  +0.143757] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.490699] systemd-fstab-generator[7516]: Ignoring "noauto" option for root device
	[  +0.290075] systemd-fstab-generator[7529]: Ignoring "noauto" option for root device
	[  +0.285138] systemd-fstab-generator[7542]: Ignoring "noauto" option for root device
	[  +5.306666] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 12:53:30 up 11 min,  0 users,  load average: 0.06, 0.19, 0.16
	Linux functional-808300 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 12:53:25 functional-808300 kubelet[5393]: E0603 12:53:25.250394    5393 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-808300\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300?resourceVersion=0&timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 12:53:25 functional-808300 kubelet[5393]: E0603 12:53:25.251128    5393 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-808300\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 12:53:25 functional-808300 kubelet[5393]: E0603 12:53:25.251922    5393 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-808300\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 12:53:25 functional-808300 kubelet[5393]: E0603 12:53:25.252927    5393 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-808300\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 12:53:25 functional-808300 kubelet[5393]: E0603 12:53:25.253972    5393 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-808300\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 12:53:25 functional-808300 kubelet[5393]: E0603 12:53:25.254062    5393 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jun 03 12:53:26 functional-808300 kubelet[5393]: E0603 12:53:26.838372    5393 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.22.146.164:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-808300.17d57f800ade178c  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-808300,UID:11918179ce61499bb08bfc780760a360,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:functional-808300,},FirstTimestamp:2024-06-03 12:50:20.826580876 +0000 UTC m=+228.200994568,LastTimestamp:2024-06-03 12:50:20.826580876 +0000 UTC m=+228.200994568,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-808300,}"
	Jun 03 12:53:29 functional-808300 kubelet[5393]: E0603 12:53:29.183796    5393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused" interval="7s"
	Jun 03 12:53:29 functional-808300 kubelet[5393]: E0603 12:53:29.843784    5393 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m12.009613975s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.301453    5393 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.301508    5393 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.301526    5393 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.308476    5393 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.308545    5393 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: I0603 12:53:30.308559    5393 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.308624    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.308671    5393 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.308693    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.308927    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.309543    5393 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.311315    5393 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.311401    5393 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.312523    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.312556    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 03 12:53:30 functional-808300 kubelet[5393]: E0603 12:53:30.313142    5393 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 12:51:42.117564    7556 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0603 12:52:29.966389    7556 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:52:29.999799    7556 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:52:30.027050    7556 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:52:30.057766    7556 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:52:30.087328    7556 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:52:30.117306    7556 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:52:30.143981    7556 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:52:30.170981    7556 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300: exit status 2 (12.0423923s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0603 12:53:31.008907   15024 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-808300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (282.89s)
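
Note on the mangled tokens above: the %!F(MISSING), %!B(MISSING) and %!l(MISSING) fragments in the kubelet "Container garbage collection failed" line, and the %!s(MISSING) fragments that appear later in the "About to run SSH command" log lines, are produced by Go's fmt package, not by this report. The logged strings already contain percent sequences (URL-encoded slashes and braces in the docker.sock request path, or a literal printf invocation), and when such a string is passed through a printf-style logger as the format string with no arguments, every parsed verb is reported as missing. A minimal illustrative Go sketch of the effect, not part of the minikube test suite:

    package main

    import "fmt"

    func main() {
        // URL-encoded path: %2F is an escaped "/", %7B an escaped "{".
        encoded := "http://%2Fvar%2Frun%2Fdocker.sock/v1.44/containers/json?filters=%7B"
        // Used as a printf format string with no arguments, fmt reports each
        // parsed verb as missing: %!F(MISSING) ... %!B(MISSING).
        // (go vet warns about the non-constant format string; that warning is the point here.)
        fmt.Printf(encoded + "\n")
    }

The failure itself is the one repeated in the stderr block: the Docker daemon inside the functional-808300 VM was not running, so every docker ps the log collector attempted was refused.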

x
+
TestFunctional/serial/ComponentHealth (180.48s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-808300 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-808300 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (2.1887729s)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-808300 get po -l tier=control-plane -n kube-system -o=json": exit status 1
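
Note on the connectex error above: "No connection could be made because the target machine actively refused it" against 172.22.146.164:8441 means the VM answered but nothing was listening on the apiserver port, which fits the Docker runtime failure seen in the previous test rather than a host networking problem (an unreachable VM would instead surface as a timeout). A minimal illustrative Go probe that makes that distinction when run from the Windows host; the address is taken from the log and everything else here is hypothetical:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 172.22.146.164:8441 is the apiserver endpoint reported in the failure above.
        conn, err := net.DialTimeout("tcp", "172.22.146.164:8441", 3*time.Second)
        if err != nil {
            // "connection refused" suggests the VM is up but the apiserver is not listening;
            // a timeout would instead point at the VM or its network being unreachable.
            fmt.Println("apiserver probe failed:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }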
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300: exit status 2 (11.8103258s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0603 12:53:45.249451    6656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs -n 25
E0603 12:55:14.722878   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs -n 25: (2m34.3133284s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-397300 --log_dir                                                  | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:39 UTC | 03 Jun 24 12:39 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-397300 --log_dir                                                  | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:39 UTC | 03 Jun 24 12:40 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-397300 --log_dir                                                  | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:40 UTC | 03 Jun 24 12:40 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-397300 --log_dir                                                  | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:40 UTC | 03 Jun 24 12:40 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-397300 --log_dir                                                  | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:40 UTC | 03 Jun 24 12:40 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-397300 --log_dir                                                  | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:40 UTC | 03 Jun 24 12:41 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-397300                                                         | nospam-397300     | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:41 UTC | 03 Jun 24 12:41 UTC |
	| start   | -p functional-808300                                                     | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:41 UTC | 03 Jun 24 12:44 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-808300                                                     | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:44 UTC | 03 Jun 24 12:47 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | minikube-local-cache-test:functional-808300                              |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache delete                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | minikube-local-cache-test:functional-808300                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	| ssh     | functional-808300 ssh sudo                                               | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-808300                                                        | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh                                                    | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache reload                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	| ssh     | functional-808300 ssh                                                    | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-808300 kubectl --                                             | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | --context functional-808300                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-808300                                                     | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:49 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:49:00
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:49:00.235842    1732 out.go:291] Setting OutFile to fd 840 ...
	I0603 12:49:00.236577    1732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:49:00.236577    1732 out.go:304] Setting ErrFile to fd 616...
	I0603 12:49:00.236577    1732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:49:00.261282    1732 out.go:298] Setting JSON to false
	I0603 12:49:00.264282    1732 start.go:129] hostinfo: {"hostname":"minikube3","uptime":19868,"bootTime":1717399071,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 12:49:00.264282    1732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 12:49:00.270409    1732 out.go:177] * [functional-808300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 12:49:00.274641    1732 notify.go:220] Checking for updates...
	I0603 12:49:00.276699    1732 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:49:00.278693    1732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:49:00.281652    1732 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 12:49:00.284648    1732 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:49:00.286651    1732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:49:00.291036    1732 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:49:00.291858    1732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:49:05.570980    1732 out.go:177] * Using the hyperv driver based on existing profile
	I0603 12:49:05.575724    1732 start.go:297] selected driver: hyperv
	I0603 12:49:05.575724    1732 start.go:901] validating driver "hyperv" against &{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:49:05.575724    1732 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:49:05.626806    1732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:49:05.626806    1732 cni.go:84] Creating CNI manager for ""
	I0603 12:49:05.626806    1732 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:49:05.626806    1732 start.go:340] cluster config:
	{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:49:05.626806    1732 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:49:05.633624    1732 out.go:177] * Starting "functional-808300" primary control-plane node in "functional-808300" cluster
	I0603 12:49:05.636635    1732 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 12:49:05.637158    1732 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 12:49:05.637158    1732 cache.go:56] Caching tarball of preloaded images
	I0603 12:49:05.637684    1732 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 12:49:05.637751    1732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 12:49:05.637751    1732 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\config.json ...
	I0603 12:49:05.640967    1732 start.go:360] acquireMachinesLock for functional-808300: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:49:05.640967    1732 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-808300"
	I0603 12:49:05.640967    1732 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:49:05.640967    1732 fix.go:54] fixHost starting: 
	I0603 12:49:05.641715    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:08.415782    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:08.415782    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:08.415782    1732 fix.go:112] recreateIfNeeded on functional-808300: state=Running err=<nil>
	W0603 12:49:08.416795    1732 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:49:08.420899    1732 out.go:177] * Updating the running hyperv "functional-808300" VM ...
	I0603 12:49:08.423508    1732 machine.go:94] provisionDockerMachine start ...
	I0603 12:49:08.423582    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:13.253487    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:13.254503    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:13.260432    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:13.261482    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:13.261482    1732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:49:13.399057    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:49:13.399210    1732 buildroot.go:166] provisioning hostname "functional-808300"
	I0603 12:49:13.399210    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:15.541436    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:15.541675    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:15.541675    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:18.074512    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:18.074512    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:18.080673    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:18.081341    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:18.081341    1732 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-808300 && echo "functional-808300" | sudo tee /etc/hostname
	I0603 12:49:18.249098    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:49:18.249098    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:20.352120    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:20.352282    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:20.352356    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:22.898474    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:22.898474    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:22.905033    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:22.905583    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:22.905583    1732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-808300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-808300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-808300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:49:23.038156    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:49:23.038156    1732 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 12:49:23.038286    1732 buildroot.go:174] setting up certificates
	I0603 12:49:23.038286    1732 provision.go:84] configureAuth start
	I0603 12:49:23.038368    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:27.735183    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:27.735183    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:27.736187    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:32.410109    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:32.410109    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:32.410109    1732 provision.go:143] copyHostCerts
	I0603 12:49:32.410879    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 12:49:32.410879    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 12:49:32.411331    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 12:49:32.412635    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 12:49:32.412635    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 12:49:32.412996    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 12:49:32.414198    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 12:49:32.414198    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 12:49:32.414545    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 12:49:32.415610    1732 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-808300 san=[127.0.0.1 172.22.146.164 functional-808300 localhost minikube]
	I0603 12:49:32.712767    1732 provision.go:177] copyRemoteCerts
	I0603 12:49:32.724764    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:49:32.724764    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:34.837128    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:34.837128    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:34.837856    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:37.375330    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:37.375330    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:37.375559    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:49:37.480771    1732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7559241s)
	I0603 12:49:37.480826    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:49:37.528205    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:49:37.578459    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:49:37.627279    1732 provision.go:87] duration metric: took 14.5888698s to configureAuth
	I0603 12:49:37.627279    1732 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:49:37.628273    1732 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:49:37.628273    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:39.750715    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:39.750715    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:39.750894    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:42.248163    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:42.248163    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:42.253817    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:42.254350    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:42.254350    1732 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 12:49:42.390315    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 12:49:42.390315    1732 buildroot.go:70] root file system type: tmpfs
	I0603 12:49:42.390486    1732 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 12:49:42.390577    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:47.015306    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:47.015306    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:47.020999    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:47.020999    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:47.021566    1732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 12:49:47.189720    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 12:49:47.189902    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:51.842444    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:51.842685    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:51.847410    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:51.848026    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:51.848136    1732 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 12:49:52.002270    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:49:52.002270    1732 machine.go:97] duration metric: took 43.5783954s to provisionDockerMachine
	I0603 12:49:52.002270    1732 start.go:293] postStartSetup for "functional-808300" (driver="hyperv")
	I0603 12:49:52.002270    1732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:49:52.014902    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:49:52.014902    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:54.129644    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:54.129780    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:54.129780    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:56.657058    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:56.657058    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:56.657058    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:49:56.769087    1732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.754029s)
	I0603 12:49:56.782600    1732 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:49:56.789695    1732 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:49:56.789695    1732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 12:49:56.790223    1732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 12:49:56.790944    1732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 12:49:56.791808    1732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts -> hosts in /etc/test/nested/copy/10544
	I0603 12:49:56.804680    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/10544
	I0603 12:49:56.825546    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 12:49:56.870114    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts --> /etc/test/nested/copy/10544/hosts (40 bytes)
	I0603 12:49:56.918755    1732 start.go:296] duration metric: took 4.9164445s for postStartSetup
	I0603 12:49:56.918830    1732 fix.go:56] duration metric: took 51.2774317s for fixHost
	I0603 12:49:56.918830    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:01.610237    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:01.610237    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:01.616356    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:01.616925    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:50:01.616925    1732 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:50:01.754458    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717419001.765759569
	
	I0603 12:50:01.754458    1732 fix.go:216] guest clock: 1717419001.765759569
	I0603 12:50:01.754999    1732 fix.go:229] Guest: 2024-06-03 12:50:01.765759569 +0000 UTC Remote: 2024-06-03 12:49:56.9188301 +0000 UTC m=+56.849473901 (delta=4.846929469s)
	I0603 12:50:01.755117    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:06.434824    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:06.434824    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:06.441287    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:06.441474    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:50:06.441474    1732 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717419001
	I0603 12:50:06.585742    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:50:01 UTC 2024
	
	I0603 12:50:06.585742    1732 fix.go:236] clock set: Mon Jun  3 12:50:01 UTC 2024
	 (err=<nil>)
	I0603 12:50:06.585742    1732 start.go:83] releasing machines lock for "functional-808300", held for 1m0.9442633s
	I0603 12:50:06.586483    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:11.280358    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:11.280358    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:11.286940    1732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:50:11.287127    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:11.297353    1732 ssh_runner.go:195] Run: cat /version.json
	I0603 12:50:11.297353    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:13.526365    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:13.526365    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:13.526449    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:16.184971    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:16.184971    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:16.185280    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:50:16.202281    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:16.202281    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:16.203074    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:50:16.291651    1732 ssh_runner.go:235] Completed: cat /version.json: (4.9942561s)
	I0603 12:50:16.306274    1732 ssh_runner.go:195] Run: systemctl --version
	I0603 12:50:16.355391    1732 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0675511s)
	I0603 12:50:16.366636    1732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:50:16.375691    1732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:50:16.388090    1732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:50:16.405978    1732 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 12:50:16.405978    1732 start.go:494] detecting cgroup driver to use...
	I0603 12:50:16.405978    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:50:16.453816    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 12:50:16.485596    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 12:50:16.503969    1732 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 12:50:16.517971    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 12:50:16.549156    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:50:16.581312    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 12:50:16.612775    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:50:16.647414    1732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:50:16.678358    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 12:50:16.708418    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 12:50:16.743475    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 12:50:16.776832    1732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:50:16.806324    1732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:50:16.840166    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:50:17.096238    1732 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 12:50:17.129261    1732 start.go:494] detecting cgroup driver to use...
	I0603 12:50:17.142588    1732 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 12:50:17.178015    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:50:17.214526    1732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:50:17.282409    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:50:17.322016    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 12:50:17.346060    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:50:17.394003    1732 ssh_runner.go:195] Run: which cri-dockerd
	I0603 12:50:17.411821    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 12:50:17.430017    1732 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 12:50:17.478608    1732 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 12:50:17.759911    1732 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 12:50:18.009777    1732 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 12:50:18.009777    1732 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
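The ~130-byte daemon.json pushed here is what switches Docker to the cgroupfs driver; a minimal sketch of such a file (assumed contents, not the exact bytes written) is:

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}

A malformed or unsupported option in this file is one common reason the subsequent "systemctl restart docker" fails, which is why the journalctl output below is captured when the restart errors out.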
	I0603 12:50:18.055298    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:50:18.318935    1732 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 12:51:29.680979    1732 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3613501s)
	I0603 12:51:29.693407    1732 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0603 12:51:29.782469    1732 out.go:177] 
	W0603 12:51:29.786096    1732 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 03 12:43:24 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.628866122Z" level=info msg="Starting up"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.630311181Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.634028433Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.661523756Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685876251Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685936153Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686065059Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686231965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686317369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686429774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686588180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686671783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686689684Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686701185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686787688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.687222106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689704107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689791211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689905315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690003819Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690236329Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690393535Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690500340Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716000481Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716245191Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716277293Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716304794Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716324495Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716446300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716794814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716969021Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717114327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717181530Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717203130Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717218631Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717231232Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717245932Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717260533Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717272933Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717285134Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717297434Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717327536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717348336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717362137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717375337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717387738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717400138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717412139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717424939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717439040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717453441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717465841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717477642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717489642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717504543Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717524444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717538544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717550045Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717602747Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717628148Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717640148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717652149Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717663249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717675450Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717686050Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717990963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718194271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718615288Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718715492Z" level=info msg="containerd successfully booted in 0.058473s"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.702473456Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.735688127Z" level=info msg="Loading containers: start."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.010503637Z" level=info msg="Loading containers: done."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031232026Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031421030Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.159563851Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:26 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.161009285Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:43:56 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.687463640Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.689959945Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690215845Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690324445Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690369545Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:43:57 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:43:57 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:43:57 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.780438278Z" level=info msg="Starting up"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.781801780Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.787716190Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1033
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.819821447Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846310594Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846401094Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846519995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846539495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846563695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846575995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846813395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846924995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846964595Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846992395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847016696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847167896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.849934901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850031601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850168801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850259101Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850291801Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850310501Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850321201Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850561202Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850705702Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850744702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850771602Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850787202Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850831302Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851085603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851156303Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851172503Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851184203Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851196303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851208703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851219903Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851231903Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851245403Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851257303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851269103Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851295403Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851313103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851325103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851341303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851354003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851367703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851379503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851390703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851401803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851413403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851426003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851437203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851447803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851458203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851471403Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851491803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851503303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851513904Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851549004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851658104Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851678204Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851698604Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851709004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851720604Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851734804Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852115105Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852376705Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852445905Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852489705Z" level=info msg="containerd successfully booted in 0.033698s"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.828570435Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.851038275Z" level=info msg="Loading containers: start."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.026943787Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.118964350Z" level=info msg="Loading containers: done."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141485490Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141680390Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.197188889Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:59 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.198903592Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.853372506Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.854600708Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855309009Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855465609Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855498609Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:44:08 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:44:09 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:44:09 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.931457417Z" level=info msg="Starting up"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.932516719Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.934127421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1334
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.966766979Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992224024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992259224Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992358425Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992394325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992420125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992436425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992562225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992696325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992729425Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992741025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992765125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992867525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996464532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996565532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996738732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996823633Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996855433Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996872533Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996882433Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997062833Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997113833Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997130833Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997144433Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997157233Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997203633Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997453534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997578234Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997614934Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997663134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997678134Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997689934Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997700634Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997715034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997729234Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997740634Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997752034Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997762234Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997779734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997792334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997804134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997815434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997826234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997837534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997847934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997884934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997921334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997937534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997948435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997958635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997969935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997987135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998006735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998018335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998028535Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998087335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998102835Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998113035Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998125435Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998134935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998146935Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998156235Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998467335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998587736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998680736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998717236Z" level=info msg="containerd successfully booted in 0.033704s"
	Jun 03 12:44:10 functional-808300 dockerd[1328]: time="2024-06-03T12:44:10.979375074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:44:13 functional-808300 dockerd[1328]: time="2024-06-03T12:44:13.979794393Z" level=info msg="Loading containers: start."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.166761224Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.246745866Z" level=info msg="Loading containers: done."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275542917Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275794717Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318299593Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:44:14 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318416693Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481193033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481300231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.482452008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.483163794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555242697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555441293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555463693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.556420474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641567724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641688622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641972616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.642377908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696408761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696920551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697026749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697598738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.923771454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.925833014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926097609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926698097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975113159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975335655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975440053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.976007342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079922031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079992130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080044229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080177726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127553471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127864765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.128102061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.134911038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534039591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534739189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534993488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.535448286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.999922775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001555370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001675769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001896169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.574212998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575391194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575730993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.576013792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119735326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119816834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119850737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.120575802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591893357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591995665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592015367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592819829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.866872994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867043707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867059308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867176618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:11 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.320707911Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.530075506Z" level=info msg="ignoring event" container=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530863111Z" level=info msg="shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530934512Z" level=warning msg="cleaning up after shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530947812Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548201118Z" level=info msg="shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548262819Z" level=warning msg="cleaning up after shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548275819Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.548926923Z" level=info msg="ignoring event" container=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.555005761Z" level=info msg="ignoring event" container=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555226762Z" level=info msg="shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555637564Z" level=warning msg="cleaning up after shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555871866Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571443362Z" level=info msg="shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571642763Z" level=info msg="ignoring event" container=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571688564Z" level=info msg="ignoring event" container=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571715264Z" level=info msg="ignoring event" container=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571729764Z" level=info msg="ignoring event" container=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583600637Z" level=warning msg="cleaning up after shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583651738Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571922365Z" level=info msg="shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602203453Z" level=warning msg="cleaning up after shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602215153Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.605428672Z" level=info msg="shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605570873Z" level=info msg="ignoring event" container=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605648174Z" level=info msg="ignoring event" container=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605689174Z" level=info msg="ignoring event" container=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605708174Z" level=info msg="ignoring event" container=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616825743Z" level=info msg="shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619069757Z" level=warning msg="cleaning up after shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619081657Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571968865Z" level=info msg="shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.622950981Z" level=warning msg="cleaning up after shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.623019281Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616768943Z" level=info msg="shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649220943Z" level=warning msg="cleaning up after shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649232743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649593346Z" level=warning msg="cleaning up after shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649632646Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616798243Z" level=info msg="shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660353412Z" level=warning msg="cleaning up after shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660613314Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571948565Z" level=info msg="shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661857022Z" level=warning msg="cleaning up after shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661869022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.701730868Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.789945914Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.800700381Z" level=info msg="ignoring event" container=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802193190Z" level=info msg="shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802687893Z" level=warning msg="cleaning up after shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802957394Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.865834983Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1328]: time="2024-06-03T12:46:16.426781600Z" level=info msg="ignoring event" container=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429021313Z" level=info msg="shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429197714Z" level=warning msg="cleaning up after shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429215515Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.461057012Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.432071476Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.471179469Z" level=info msg="ignoring event" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471301366Z" level=info msg="shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471394963Z" level=warning msg="cleaning up after shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471408762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.533991230Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534869803Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534996499Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.535310690Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:46:22 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Consumed 4.876s CPU time.
	Jun 03 12:46:22 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.610929688Z" level=info msg="Starting up"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.611865461Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.613136725Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=3917
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.646536071Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670247194Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670360391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670450088Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670483087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670506787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670539786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670840677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670938074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670960374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670972073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670998073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.671139469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674461374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674583370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675060557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675230152Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675269851Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675297750Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675312250Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675642440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675701438Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675746437Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675788936Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675843034Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675898433Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677513487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677902676Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677984973Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678005973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678019272Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678033372Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678045471Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678074771Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678087670Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678099470Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678111970Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678122369Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678141069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678165268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678179068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678190967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678201767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678212967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678223666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678234666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678245966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678259765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678270865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678281565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678298864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678314564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678506758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678611555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678628755Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678700553Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679040743Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679084142Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679118541Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679144240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679155740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679165739Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679517929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679766922Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679827521Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679865720Z" level=info msg="containerd successfully booted in 0.035745s"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.663212880Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.695980015Z" level=info msg="Loading containers: start."
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.961510211Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.046062971Z" level=info msg="Loading containers: done."
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.075922544Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.076129939Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124525761Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124901652Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:46:24 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.231994444Z" level=error msg="Handler for GET /v1.44/containers/68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" spanID=326af23131ec94a7 traceID=8803c53e169299942225f4075fc21de5
	Jun 03 12:46:24 functional-808300 dockerd[3911]: 2024/06/03 12:46:24 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772084063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772274159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772357358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.775252298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945246488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945323086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945406685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.950967170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029005105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029349598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029863988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.030264081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039564104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039688602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039761901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039928798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226303462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226586457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226751953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.227086747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347252567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347436764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347474363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347654660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.441905572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442046969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442209966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442589559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.635985990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636416182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636608978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.637648558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.848060467Z" level=info msg="ignoring event" container=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851167708Z" level=info msg="shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851742597Z" level=warning msg="cleaning up after shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851821695Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.861031421Z" level=info msg="ignoring event" container=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.864043064Z" level=info msg="shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.865018845Z" level=info msg="ignoring event" container=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866029226Z" level=warning msg="cleaning up after shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866146324Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.865866429Z" level=info msg="shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866559616Z" level=warning msg="cleaning up after shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866626315Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.874086573Z" level=info msg="ignoring event" container=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.875139053Z" level=info msg="ignoring event" container=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879726666Z" level=info msg="shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.883291398Z" level=warning msg="cleaning up after shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879810365Z" level=info msg="shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886134245Z" level=warning msg="cleaning up after shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886413939Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.884961767Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.005534788Z" level=info msg="ignoring event" container=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007078361Z" level=info msg="shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007356756Z" level=warning msg="cleaning up after shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007522453Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.117025348Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.487894595Z" level=info msg="ignoring event" container=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.489713764Z" level=info msg="shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490041558Z" level=warning msg="cleaning up after shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490061758Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.915977147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916565637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916679435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916848732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.031752879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032666665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032798863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.033668649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3911]: time="2024-06-03T12:46:29.861712863Z" level=info msg="ignoring event" container=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863639332Z" level=info msg="shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863797430Z" level=warning msg="cleaning up after shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863862329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194045838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194125737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194139737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194288235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.324621840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326281415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326470813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326978105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424497687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424951381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447077459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447586651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531075037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531171736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531184436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531290034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542348873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542475071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542490771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542581970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554547048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554615849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554645449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554819849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595679596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595829096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595871096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.596066296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615722419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615775719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615802019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615963419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619500423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619605123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619619223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619740523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.362279071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.364954075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365043476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365060876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365137676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363853574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363885474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363981074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401018432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401163732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401199732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401348832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:50:18 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.355659920Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.500564779Z" level=info msg="ignoring event" container=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.502392091Z" level=info msg="shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505257410Z" level=warning msg="cleaning up after shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505505012Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.559469469Z" level=info msg="ignoring event" container=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562029186Z" level=info msg="shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562079586Z" level=warning msg="cleaning up after shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562089586Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.565925812Z" level=info msg="ignoring event" container=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566150213Z" level=info msg="shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566239014Z" level=warning msg="cleaning up after shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566294014Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.568666030Z" level=info msg="ignoring event" container=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568889531Z" level=info msg="shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568944532Z" level=warning msg="cleaning up after shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568956532Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.591020678Z" level=info msg="ignoring event" container=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591289280Z" level=info msg="shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591381680Z" level=warning msg="cleaning up after shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591394180Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.601843549Z" level=info msg="shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602416253Z" level=info msg="ignoring event" container=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602469454Z" level=info msg="ignoring event" container=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602501354Z" level=info msg="ignoring event" container=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602446653Z" level=warning msg="cleaning up after shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602625555Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608358493Z" level=info msg="shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608420693Z" level=warning msg="cleaning up after shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608435393Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622700688Z" level=info msg="shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622837388Z" level=warning msg="cleaning up after shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622919789Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651705580Z" level=info msg="shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651827580Z" level=warning msg="cleaning up after shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651840680Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653814394Z" level=info msg="ignoring event" container=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653869794Z" level=info msg="ignoring event" container=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656537812Z" level=info msg="shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656607912Z" level=warning msg="cleaning up after shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656638212Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689247628Z" level=info msg="shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689349429Z" level=warning msg="cleaning up after shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689362229Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.689544230Z" level=info msg="ignoring event" container=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.776260304Z" level=info msg="ignoring event" container=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.781705240Z" level=info msg="shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782034342Z" level=warning msg="cleaning up after shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782163743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.471467983Z" level=info msg="shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472291989Z" level=warning msg="cleaning up after shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472355489Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3911]: time="2024-06-03T12:50:23.473084794Z" level=info msg="ignoring event" container=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.462170568Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.522259595Z" level=info msg="ignoring event" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524322178Z" level=info msg="shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524549387Z" level=warning msg="cleaning up after shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524566388Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.585453246Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586244178Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586390484Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586415685Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:50:29 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Consumed 9.808s CPU time.
	Jun 03 12:50:29 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:50:29 functional-808300 dockerd[7943]: time="2024-06-03T12:50:29.663260817Z" level=info msg="Starting up"
	Jun 03 12:51:29 functional-808300 dockerd[7943]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 12:51:29 functional-808300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0603 12:51:29.786899    1732 out.go:239] * 
	W0603 12:51:29.788963    1732 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:51:29.789078    1732 out.go:177] 
	
	
	==> Docker <==
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error getting RW layer size for container ID '75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID '75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca'"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error getting RW layer size for container ID '2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428'"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error getting RW layer size for container ID 'f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b'"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error getting RW layer size for container ID '02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID '02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165'"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error getting RW layer size for container ID '1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc'"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error getting RW layer size for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f'"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error getting RW layer size for container ID '83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID '83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210'"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error getting RW layer size for container ID '1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181'"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error getting RW layer size for container ID '83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID '83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf'"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error getting RW layer size for container ID 'eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d'"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error getting RW layer size for container ID 'c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b'"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="error getting RW layer size for container ID 'dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:55:30 functional-808300 cri-dockerd[4143]: time="2024-06-03T12:55:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-03T12:55:30Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +13.935296] systemd-fstab-generator[2356]: Ignoring "noauto" option for root device
	[  +0.285231] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.968672] kauditd_printk_skb: 71 callbacks suppressed
	[Jun 3 12:46] systemd-fstab-generator[3432]: Ignoring "noauto" option for root device
	[  +0.669802] systemd-fstab-generator[3482]: Ignoring "noauto" option for root device
	[  +0.254078] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.299244] systemd-fstab-generator[3508]: Ignoring "noauto" option for root device
	[  +5.308659] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.948638] systemd-fstab-generator[4092]: Ignoring "noauto" option for root device
	[  +0.218396] systemd-fstab-generator[4104]: Ignoring "noauto" option for root device
	[  +0.206903] systemd-fstab-generator[4116]: Ignoring "noauto" option for root device
	[  +0.257355] systemd-fstab-generator[4131]: Ignoring "noauto" option for root device
	[  +0.830261] systemd-fstab-generator[4289]: Ignoring "noauto" option for root device
	[  +0.959896] kauditd_printk_skb: 142 callbacks suppressed
	[  +5.613475] systemd-fstab-generator[5386]: Ignoring "noauto" option for root device
	[  +0.142828] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.855368] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.262421] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.413051] systemd-fstab-generator[5910]: Ignoring "noauto" option for root device
	[Jun 3 12:50] systemd-fstab-generator[7480]: Ignoring "noauto" option for root device
	[  +0.143757] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.490699] systemd-fstab-generator[7516]: Ignoring "noauto" option for root device
	[  +0.290075] systemd-fstab-generator[7529]: Ignoring "noauto" option for root device
	[  +0.285138] systemd-fstab-generator[7542]: Ignoring "noauto" option for root device
	[  +5.306666] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 12:56:31 up 14 min,  0 users,  load average: 0.03, 0.11, 0.12
	Linux functional-808300 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 12:56:24 functional-808300 kubelet[5393]: E0603 12:56:24.254309    5393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused" interval="7s"
	Jun 03 12:56:24 functional-808300 kubelet[5393]: E0603 12:56:24.874372    5393 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m7.040198267s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jun 03 12:56:28 functional-808300 kubelet[5393]: E0603 12:56:28.449015    5393 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-808300\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300?resourceVersion=0&timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 12:56:28 functional-808300 kubelet[5393]: E0603 12:56:28.449617    5393 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-808300\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 12:56:28 functional-808300 kubelet[5393]: E0603 12:56:28.450470    5393 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-808300\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 12:56:28 functional-808300 kubelet[5393]: E0603 12:56:28.451537    5393 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-808300\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 12:56:28 functional-808300 kubelet[5393]: E0603 12:56:28.452680    5393 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-808300\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 12:56:28 functional-808300 kubelet[5393]: E0603 12:56:28.452806    5393 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jun 03 12:56:29 functional-808300 kubelet[5393]: E0603 12:56:29.875284    5393 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m12.041091256s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.033709    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.034413    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.037114    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.037201    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.038143    5393 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.038774    5393 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.038879    5393 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.038895    5393 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.039048    5393 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.039424    5393 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.039243    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.040189    5393 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: I0603 12:56:31.039794    5393 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.039371    5393 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.039755    5393 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 12:56:31 functional-808300 kubelet[5393]: E0603 12:56:31.040835    5393 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 12:53:57.051339   15088 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0603 12:54:30.533674   15088 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:54:30.567258   15088 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:54:30.594871   15088 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:54:30.621882   15088 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:54:30.647897   15088 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:54:30.676870   15088 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:54:30.704598   15088 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 12:55:30.810545   15088 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300: exit status 2 (11.8066391s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 12:56:31.717092   14000 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-808300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (180.48s)
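
The stdout captured above shows the underlying failure: after the SIGTERM restart at 12:50, dockerd timed out dialing /run/containerd/containerd.sock and never came back, so cri-dockerd, the kubelet and the apiserver on port 8441 all went down with it. For reference only, a minimal manual-triage sketch for that state; the profile name comes from the logs above, while the unit names and the sequence itself are illustrative and not part of the test suite:

	out/minikube-windows-amd64.exe -p functional-808300 ssh
	# inside the guest: containerd has to be reachable before dockerd can dial its socket
	sudo systemctl status containerd docker
	sudo journalctl -u containerd -u docker --since "10 min ago" --no-pager
	sudo systemctl restart containerd docker
	exit
	# collect the bundle the warning box above asks for
	out/minikube-windows-amd64.exe -p functional-808300 logs --file=logs.txt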

                                                
                                    
TestFunctional/serial/InvalidService (4.23s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-808300 apply -f testdata\invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-808300 apply -f testdata\invalidsvc.yaml: exit status 1 (4.2205915s)

                                                
                                                
** stderr ** 
	error: error validating "testdata\\invalidsvc.yaml": error validating data: failed to download openapi: Get "https://172.22.146.164:8441/openapi/v2?timeout=32s": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2319: kubectl --context functional-808300 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (4.23s)
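
The apply step here never reached the invalid-service validation it is meant to exercise: kubectl failed earlier, on a refused connection to the apiserver at 172.22.146.164:8441, the same outage captured in the previous test. For orientation, a short sketch of what this step amounts to once the apiserver answers again; the context name and manifest path are taken from the log above, the rest is illustrative:

	# illustrative only; assumes the apiserver for context functional-808300 is reachable again
	kubectl --context functional-808300 cluster-info                          # should print the control-plane URL instead of "connection refused"
	kubectl --context functional-808300 apply -f testdata\invalidsvc.yaml     # expected result: rejected by validation, not a dial error
	# --validate=false, suggested in the error text, only skips client-side schema validation and would not help while port 8441 refuses connections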

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-808300 config unset cpus" to be -""- but got *"W0603 13:03:37.431234    4412 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 config get cpus: exit status 14 (232.9882ms)

                                                
                                                
** stderr ** 
	W0603 13:03:37.713120    8588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-808300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0603 13:03:37.713120    8588 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-808300 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0603 13:03:37.953577    9604 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-808300 config get cpus" to be -""- but got *"W0603 13:03:38.198143    9176 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-808300 config unset cpus" to be -""- but got *"W0603 13:03:38.428359    4260 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 config get cpus: exit status 14 (225.9643ms)

                                                
                                                
** stderr ** 
	W0603 13:03:38.671178    3172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-808300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0603 13:03:38.671178    3172 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.48s)
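
These config assertions fail on stderr content alone: unset/get/set behave as expected, but every invocation carries the extra "Unable to resolve the current Docker CLI context \"default\"" warning, so the exact-match comparison at functional_test.go:1206 no longer holds. Below is a short sketch of how to inspect the context state the warning points at; the directory path comes from the warning itself, and whether repairing that state silences it is an assumption, not something this report verifies:

	# illustrative inspection of the Docker CLI context state named in the warning (not a verified fix)
	docker context ls                                                        # shows which CLI context is current on the host
	dir C:\Users\jenkins.minikube3\.docker\contexts\meta                     # the meta.json path from the warning lives under this directory
	out/minikube-windows-amd64.exe -p functional-808300 config get cpus      # with a clean stderr this prints only "Error: specified key could not be found in config"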

                                                
                                    
TestFunctional/parallel/StatusCmd (302.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 status: exit status 2 (11.9184232s)

                                                
                                                
-- stdout --
	functional-808300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	W0603 13:06:46.893138    2280 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:852: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-808300 status" : exit status 2
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (11.9142634s)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
** stderr ** 
	W0603 13:06:58.806154    6744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-808300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 status -o json: exit status 2 (11.8741522s)

-- stdout --
	{"Name":"functional-808300","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
** stderr ** 
	W0603 13:07:10.718768    7556 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-808300 status -o json" : exit status 2
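
The three status invocations above render the same cluster state in different forms: default text, a custom Go template supplied via -f (host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}), and JSON via -o json, whose fields (Name, Host, Kubelet, APIServer, Kubeconfig, Worker) are visible in the captured stdout. The sketch below shows how such a template produces the "host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured" line; the Status struct is a local stand-in defined for illustration and may differ from minikube's own type.

    // Minimal sketch of rendering a status struct with the same text/template
    // format string passed to "minikube status -f" in the run above. The
    // Status type here is an assumed stand-in, not minikube's actual type.
    package main

    import (
    	"os"
    	"text/template"
    )

    type Status struct {
    	Name       string
    	Host       string
    	Kubelet    string
    	APIServer  string
    	Kubeconfig string
    	Worker     bool
    }

    func main() {
    	s := Status{Name: "functional-808300", Host: "Running", Kubelet: "Running",
    		APIServer: "Stopped", Kubeconfig: "Configured", Worker: false}
    	// "kublet" is kept as-is: it is only a literal label in the template,
    	// while {{.Kubelet}} is the field actually being read.
    	format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
    	tmpl := template.Must(template.New("status").Parse(format))
    	_ = tmpl.Execute(os.Stdout, s)
    	// Output: host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured
    }
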
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300: exit status 2 (11.8993707s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0603 13:07:22.595833    1452 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs -n 25: (4m0.9234404s)
helpers_test.go:252: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| config  | functional-808300 config unset                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| config  | functional-808300 config get                                                                        | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| addons  | functional-808300 addons list                                                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	| addons  | functional-808300 addons list                                                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| service | functional-808300 service list                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	| ssh     | functional-808300 ssh -n                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | functional-808300 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| service | functional-808300 service list                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| service | functional-808300 service                                                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | --namespace=default --https                                                                         |                   |                   |         |                     |                     |
	|         | --url hello-node                                                                                    |                   |                   |         |                     |                     |
	| cp      | functional-808300 cp functional-808300:/home/docker/cp-test.txt                                     | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:04 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd2662913280\001\cp-test.txt |                   |                   |         |                     |                     |
	| service | functional-808300                                                                                   | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC |                     |
	|         | service hello-node --url                                                                            |                   |                   |         |                     |                     |
	|         | --format={{.IP}}                                                                                    |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh -n                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | functional-808300 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| service | functional-808300 service                                                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC |                     |
	|         | hello-node --url                                                                                    |                   |                   |         |                     |                     |
	| cp      | functional-808300 cp                                                                                | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh -n                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | functional-808300 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| license |                                                                                                     | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	| ssh     | functional-808300 ssh echo                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | hello                                                                                               |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh cat                                                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | /etc/hostname                                                                                       |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh sudo                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC |                     |
	|         | systemctl is-active crio                                                                            |                   |                   |         |                     |                     |
	| tunnel  | functional-808300 tunnel                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel  | functional-808300 tunnel                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel  | functional-808300 tunnel                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image   | functional-808300 image load --daemon                                                               | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC | 03 Jun 24 13:05 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-808300                                            |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image   | functional-808300 image ls                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC | 03 Jun 24 13:06 UTC |
	| image   | functional-808300 image load --daemon                                                               | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:06 UTC | 03 Jun 24 13:07 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-808300                                            |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image   | functional-808300 image ls                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:07 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:49:00
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:49:00.235842    1732 out.go:291] Setting OutFile to fd 840 ...
	I0603 12:49:00.236577    1732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:49:00.236577    1732 out.go:304] Setting ErrFile to fd 616...
	I0603 12:49:00.236577    1732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:49:00.261282    1732 out.go:298] Setting JSON to false
	I0603 12:49:00.264282    1732 start.go:129] hostinfo: {"hostname":"minikube3","uptime":19868,"bootTime":1717399071,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 12:49:00.264282    1732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 12:49:00.270409    1732 out.go:177] * [functional-808300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 12:49:00.274641    1732 notify.go:220] Checking for updates...
	I0603 12:49:00.276699    1732 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:49:00.278693    1732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:49:00.281652    1732 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 12:49:00.284648    1732 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:49:00.286651    1732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:49:00.291036    1732 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:49:00.291858    1732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:49:05.570980    1732 out.go:177] * Using the hyperv driver based on existing profile
	I0603 12:49:05.575724    1732 start.go:297] selected driver: hyperv
	I0603 12:49:05.575724    1732 start.go:901] validating driver "hyperv" against &{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:49:05.575724    1732 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:49:05.626806    1732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:49:05.626806    1732 cni.go:84] Creating CNI manager for ""
	I0603 12:49:05.626806    1732 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:49:05.626806    1732 start.go:340] cluster config:
	{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:49:05.626806    1732 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:49:05.633624    1732 out.go:177] * Starting "functional-808300" primary control-plane node in "functional-808300" cluster
	I0603 12:49:05.636635    1732 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 12:49:05.637158    1732 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 12:49:05.637158    1732 cache.go:56] Caching tarball of preloaded images
	I0603 12:49:05.637684    1732 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 12:49:05.637751    1732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 12:49:05.637751    1732 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\config.json ...
	I0603 12:49:05.640967    1732 start.go:360] acquireMachinesLock for functional-808300: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:49:05.640967    1732 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-808300"
	I0603 12:49:05.640967    1732 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:49:05.640967    1732 fix.go:54] fixHost starting: 
	I0603 12:49:05.641715    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:08.415782    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:08.415782    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:08.415782    1732 fix.go:112] recreateIfNeeded on functional-808300: state=Running err=<nil>
	W0603 12:49:08.416795    1732 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:49:08.420899    1732 out.go:177] * Updating the running hyperv "functional-808300" VM ...
	I0603 12:49:08.423508    1732 machine.go:94] provisionDockerMachine start ...
	I0603 12:49:08.423582    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:13.253487    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:13.254503    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:13.260432    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:13.261482    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:13.261482    1732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:49:13.399057    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:49:13.399210    1732 buildroot.go:166] provisioning hostname "functional-808300"
	I0603 12:49:13.399210    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:15.541436    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:15.541675    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:15.541675    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:18.074512    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:18.074512    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:18.080673    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:18.081341    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:18.081341    1732 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-808300 && echo "functional-808300" | sudo tee /etc/hostname
	I0603 12:49:18.249098    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:49:18.249098    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:20.352120    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:20.352282    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:20.352356    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:22.898474    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:22.898474    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:22.905033    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:22.905583    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:22.905583    1732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-808300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-808300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-808300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:49:23.038156    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:49:23.038156    1732 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 12:49:23.038286    1732 buildroot.go:174] setting up certificates
	I0603 12:49:23.038286    1732 provision.go:84] configureAuth start
	I0603 12:49:23.038368    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:27.735183    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:27.735183    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:27.736187    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:32.410109    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:32.410109    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:32.410109    1732 provision.go:143] copyHostCerts
	I0603 12:49:32.410879    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 12:49:32.410879    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 12:49:32.411331    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 12:49:32.412635    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 12:49:32.412635    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 12:49:32.412996    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 12:49:32.414198    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 12:49:32.414198    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 12:49:32.414545    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 12:49:32.415610    1732 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-808300 san=[127.0.0.1 172.22.146.164 functional-808300 localhost minikube]
	I0603 12:49:32.712767    1732 provision.go:177] copyRemoteCerts
	I0603 12:49:32.724764    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:49:32.724764    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:34.837128    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:34.837128    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:34.837856    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:37.375330    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:37.375330    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:37.375559    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:49:37.480771    1732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7559241s)
	I0603 12:49:37.480826    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:49:37.528205    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:49:37.578459    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:49:37.627279    1732 provision.go:87] duration metric: took 14.5888698s to configureAuth
	I0603 12:49:37.627279    1732 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:49:37.628273    1732 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:49:37.628273    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:39.750715    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:39.750715    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:39.750894    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:42.248163    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:42.248163    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:42.253817    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:42.254350    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:42.254350    1732 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 12:49:42.390315    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 12:49:42.390315    1732 buildroot.go:70] root file system type: tmpfs
	I0603 12:49:42.390486    1732 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 12:49:42.390577    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:47.015306    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:47.015306    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:47.020999    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:47.020999    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:47.021566    1732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 12:49:47.189720    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 12:49:47.189902    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:51.842444    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:51.842685    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:51.847410    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:51.848026    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:51.848136    1732 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 12:49:52.002270    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:49:52.002270    1732 machine.go:97] duration metric: took 43.5783954s to provisionDockerMachine
	I0603 12:49:52.002270    1732 start.go:293] postStartSetup for "functional-808300" (driver="hyperv")
	I0603 12:49:52.002270    1732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:49:52.014902    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:49:52.014902    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:54.129644    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:54.129780    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:54.129780    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:56.657058    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:56.657058    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:56.657058    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:49:56.769087    1732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.754029s)
	I0603 12:49:56.782600    1732 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:49:56.789695    1732 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:49:56.789695    1732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 12:49:56.790223    1732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 12:49:56.790944    1732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 12:49:56.791808    1732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts -> hosts in /etc/test/nested/copy/10544
	I0603 12:49:56.804680    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/10544
	I0603 12:49:56.825546    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 12:49:56.870114    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts --> /etc/test/nested/copy/10544/hosts (40 bytes)
	I0603 12:49:56.918755    1732 start.go:296] duration metric: took 4.9164445s for postStartSetup
	I0603 12:49:56.918830    1732 fix.go:56] duration metric: took 51.2774317s for fixHost
	I0603 12:49:56.918830    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:01.610237    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:01.610237    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:01.616356    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:01.616925    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:50:01.616925    1732 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:50:01.754458    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717419001.765759569
	
	I0603 12:50:01.754458    1732 fix.go:216] guest clock: 1717419001.765759569
	I0603 12:50:01.754999    1732 fix.go:229] Guest: 2024-06-03 12:50:01.765759569 +0000 UTC Remote: 2024-06-03 12:49:56.9188301 +0000 UTC m=+56.849473901 (delta=4.846929469s)
	I0603 12:50:01.755117    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:06.434824    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:06.434824    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:06.441287    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:06.441474    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:50:06.441474    1732 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717419001
	I0603 12:50:06.585742    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:50:01 UTC 2024
	
	I0603 12:50:06.585742    1732 fix.go:236] clock set: Mon Jun  3 12:50:01 UTC 2024
	 (err=<nil>)
	I0603 12:50:06.585742    1732 start.go:83] releasing machines lock for "functional-808300", held for 1m0.9442633s
	I0603 12:50:06.586483    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:11.280358    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:11.280358    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:11.286940    1732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:50:11.287127    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:11.297353    1732 ssh_runner.go:195] Run: cat /version.json
	I0603 12:50:11.297353    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:13.526365    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:13.526365    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:13.526449    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:16.184971    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:16.184971    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:16.185280    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:50:16.202281    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:16.202281    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:16.203074    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:50:16.291651    1732 ssh_runner.go:235] Completed: cat /version.json: (4.9942561s)
	I0603 12:50:16.306274    1732 ssh_runner.go:195] Run: systemctl --version
	I0603 12:50:16.355391    1732 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0675511s)
	I0603 12:50:16.366636    1732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:50:16.375691    1732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:50:16.388090    1732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:50:16.405978    1732 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 12:50:16.405978    1732 start.go:494] detecting cgroup driver to use...
	I0603 12:50:16.405978    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:50:16.453816    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 12:50:16.485596    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 12:50:16.503969    1732 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 12:50:16.517971    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 12:50:16.549156    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:50:16.581312    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 12:50:16.612775    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:50:16.647414    1732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:50:16.678358    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 12:50:16.708418    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 12:50:16.743475    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 12:50:16.776832    1732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:50:16.806324    1732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:50:16.840166    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:50:17.096238    1732 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 12:50:17.129261    1732 start.go:494] detecting cgroup driver to use...
	I0603 12:50:17.142588    1732 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 12:50:17.178015    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:50:17.214526    1732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:50:17.282409    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:50:17.322016    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 12:50:17.346060    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:50:17.394003    1732 ssh_runner.go:195] Run: which cri-dockerd
	I0603 12:50:17.411821    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 12:50:17.430017    1732 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 12:50:17.478608    1732 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 12:50:17.759911    1732 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 12:50:18.009777    1732 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 12:50:18.009777    1732 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 12:50:18.055298    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:50:18.318935    1732 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 12:51:29.680979    1732 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3613501s)
	I0603 12:51:29.693407    1732 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0603 12:51:29.782469    1732 out.go:177] 
	W0603 12:51:29.786096    1732 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 03 12:43:24 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.628866122Z" level=info msg="Starting up"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.630311181Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.634028433Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.661523756Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685876251Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685936153Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686065059Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686231965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686317369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686429774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686588180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686671783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686689684Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686701185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686787688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.687222106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689704107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689791211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689905315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690003819Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690236329Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690393535Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690500340Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716000481Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716245191Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716277293Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716304794Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716324495Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716446300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716794814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716969021Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717114327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717181530Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717203130Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717218631Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717231232Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717245932Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717260533Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717272933Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717285134Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717297434Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717327536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717348336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717362137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717375337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717387738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717400138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717412139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717424939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717439040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717453441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717465841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717477642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717489642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717504543Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717524444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717538544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717550045Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717602747Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717628148Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717640148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717652149Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717663249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717675450Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717686050Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717990963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718194271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718615288Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718715492Z" level=info msg="containerd successfully booted in 0.058473s"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.702473456Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.735688127Z" level=info msg="Loading containers: start."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.010503637Z" level=info msg="Loading containers: done."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031232026Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031421030Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.159563851Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:26 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.161009285Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:43:56 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.687463640Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.689959945Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690215845Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690324445Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690369545Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:43:57 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:43:57 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:43:57 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.780438278Z" level=info msg="Starting up"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.781801780Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.787716190Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1033
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.819821447Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846310594Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846401094Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846519995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846539495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846563695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846575995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846813395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846924995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846964595Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846992395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847016696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847167896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.849934901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850031601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850168801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850259101Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850291801Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850310501Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850321201Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850561202Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850705702Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850744702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850771602Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850787202Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850831302Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851085603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851156303Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851172503Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851184203Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851196303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851208703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851219903Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851231903Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851245403Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851257303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851269103Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851295403Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851313103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851325103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851341303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851354003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851367703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851379503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851390703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851401803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851413403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851426003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851437203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851447803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851458203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851471403Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851491803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851503303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851513904Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851549004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851658104Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851678204Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851698604Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851709004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851720604Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851734804Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852115105Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852376705Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852445905Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852489705Z" level=info msg="containerd successfully booted in 0.033698s"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.828570435Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.851038275Z" level=info msg="Loading containers: start."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.026943787Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.118964350Z" level=info msg="Loading containers: done."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141485490Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141680390Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.197188889Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:59 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.198903592Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.853372506Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.854600708Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855309009Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855465609Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855498609Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:44:08 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:44:09 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:44:09 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.931457417Z" level=info msg="Starting up"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.932516719Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.934127421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1334
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.966766979Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992224024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992259224Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992358425Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992394325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992420125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992436425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992562225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992696325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992729425Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992741025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992765125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992867525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996464532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996565532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996738732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996823633Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996855433Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996872533Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996882433Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997062833Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997113833Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997130833Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997144433Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997157233Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997203633Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997453534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997578234Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997614934Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997663134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997678134Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997689934Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997700634Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997715034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997729234Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997740634Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997752034Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997762234Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997779734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997792334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997804134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997815434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997826234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997837534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997847934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997884934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997921334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997937534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997948435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997958635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997969935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997987135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998006735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998018335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998028535Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998087335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998102835Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998113035Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998125435Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998134935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998146935Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998156235Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998467335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998587736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998680736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998717236Z" level=info msg="containerd successfully booted in 0.033704s"
	Jun 03 12:44:10 functional-808300 dockerd[1328]: time="2024-06-03T12:44:10.979375074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:44:13 functional-808300 dockerd[1328]: time="2024-06-03T12:44:13.979794393Z" level=info msg="Loading containers: start."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.166761224Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.246745866Z" level=info msg="Loading containers: done."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275542917Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275794717Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318299593Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:44:14 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318416693Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481193033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481300231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.482452008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.483163794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555242697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555441293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555463693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.556420474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641567724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641688622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641972616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.642377908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696408761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696920551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697026749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697598738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.923771454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.925833014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926097609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926698097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975113159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975335655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975440053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.976007342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079922031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079992130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080044229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080177726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127553471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127864765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.128102061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.134911038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534039591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534739189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534993488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.535448286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.999922775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001555370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001675769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001896169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.574212998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575391194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575730993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.576013792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119735326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119816834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119850737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.120575802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591893357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591995665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592015367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592819829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.866872994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867043707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867059308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867176618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:11 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.320707911Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.530075506Z" level=info msg="ignoring event" container=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530863111Z" level=info msg="shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530934512Z" level=warning msg="cleaning up after shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530947812Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548201118Z" level=info msg="shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548262819Z" level=warning msg="cleaning up after shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548275819Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.548926923Z" level=info msg="ignoring event" container=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.555005761Z" level=info msg="ignoring event" container=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555226762Z" level=info msg="shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555637564Z" level=warning msg="cleaning up after shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555871866Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571443362Z" level=info msg="shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571642763Z" level=info msg="ignoring event" container=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571688564Z" level=info msg="ignoring event" container=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571715264Z" level=info msg="ignoring event" container=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571729764Z" level=info msg="ignoring event" container=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583600637Z" level=warning msg="cleaning up after shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583651738Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571922365Z" level=info msg="shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602203453Z" level=warning msg="cleaning up after shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602215153Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.605428672Z" level=info msg="shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605570873Z" level=info msg="ignoring event" container=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605648174Z" level=info msg="ignoring event" container=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605689174Z" level=info msg="ignoring event" container=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605708174Z" level=info msg="ignoring event" container=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616825743Z" level=info msg="shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619069757Z" level=warning msg="cleaning up after shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619081657Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571968865Z" level=info msg="shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.622950981Z" level=warning msg="cleaning up after shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.623019281Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616768943Z" level=info msg="shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649220943Z" level=warning msg="cleaning up after shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649232743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649593346Z" level=warning msg="cleaning up after shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649632646Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616798243Z" level=info msg="shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660353412Z" level=warning msg="cleaning up after shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660613314Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571948565Z" level=info msg="shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661857022Z" level=warning msg="cleaning up after shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661869022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.701730868Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.789945914Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.800700381Z" level=info msg="ignoring event" container=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802193190Z" level=info msg="shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802687893Z" level=warning msg="cleaning up after shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802957394Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.865834983Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1328]: time="2024-06-03T12:46:16.426781600Z" level=info msg="ignoring event" container=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429021313Z" level=info msg="shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429197714Z" level=warning msg="cleaning up after shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429215515Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.461057012Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.432071476Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.471179469Z" level=info msg="ignoring event" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471301366Z" level=info msg="shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471394963Z" level=warning msg="cleaning up after shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471408762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.533991230Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534869803Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534996499Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.535310690Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:46:22 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Consumed 4.876s CPU time.
	Jun 03 12:46:22 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.610929688Z" level=info msg="Starting up"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.611865461Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.613136725Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=3917
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.646536071Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670247194Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670360391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670450088Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670483087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670506787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670539786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670840677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670938074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670960374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670972073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670998073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.671139469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674461374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674583370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675060557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675230152Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675269851Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675297750Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675312250Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675642440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675701438Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675746437Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675788936Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675843034Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675898433Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677513487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677902676Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677984973Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678005973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678019272Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678033372Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678045471Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678074771Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678087670Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678099470Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678111970Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678122369Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678141069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678165268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678179068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678190967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678201767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678212967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678223666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678234666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678245966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678259765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678270865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678281565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678298864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678314564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678506758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678611555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678628755Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678700553Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679040743Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679084142Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679118541Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679144240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679155740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679165739Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679517929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679766922Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679827521Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679865720Z" level=info msg="containerd successfully booted in 0.035745s"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.663212880Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.695980015Z" level=info msg="Loading containers: start."
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.961510211Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.046062971Z" level=info msg="Loading containers: done."
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.075922544Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.076129939Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124525761Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124901652Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:46:24 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.231994444Z" level=error msg="Handler for GET /v1.44/containers/68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" spanID=326af23131ec94a7 traceID=8803c53e169299942225f4075fc21de5
	Jun 03 12:46:24 functional-808300 dockerd[3911]: 2024/06/03 12:46:24 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772084063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772274159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772357358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.775252298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945246488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945323086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945406685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.950967170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029005105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029349598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029863988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.030264081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039564104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039688602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039761901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039928798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226303462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226586457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226751953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.227086747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347252567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347436764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347474363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347654660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.441905572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442046969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442209966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442589559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.635985990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636416182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636608978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.637648558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.848060467Z" level=info msg="ignoring event" container=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851167708Z" level=info msg="shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851742597Z" level=warning msg="cleaning up after shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851821695Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.861031421Z" level=info msg="ignoring event" container=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.864043064Z" level=info msg="shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.865018845Z" level=info msg="ignoring event" container=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866029226Z" level=warning msg="cleaning up after shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866146324Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.865866429Z" level=info msg="shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866559616Z" level=warning msg="cleaning up after shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866626315Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.874086573Z" level=info msg="ignoring event" container=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.875139053Z" level=info msg="ignoring event" container=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879726666Z" level=info msg="shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.883291398Z" level=warning msg="cleaning up after shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879810365Z" level=info msg="shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886134245Z" level=warning msg="cleaning up after shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886413939Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.884961767Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.005534788Z" level=info msg="ignoring event" container=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007078361Z" level=info msg="shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007356756Z" level=warning msg="cleaning up after shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007522453Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.117025348Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.487894595Z" level=info msg="ignoring event" container=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.489713764Z" level=info msg="shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490041558Z" level=warning msg="cleaning up after shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490061758Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.915977147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916565637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916679435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916848732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.031752879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032666665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032798863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.033668649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3911]: time="2024-06-03T12:46:29.861712863Z" level=info msg="ignoring event" container=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863639332Z" level=info msg="shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863797430Z" level=warning msg="cleaning up after shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863862329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194045838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194125737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194139737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194288235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.324621840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326281415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326470813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326978105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424497687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424951381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447077459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447586651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531075037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531171736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531184436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531290034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542348873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542475071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542490771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542581970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554547048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554615849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554645449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554819849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595679596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595829096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595871096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.596066296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615722419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615775719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615802019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615963419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619500423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619605123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619619223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619740523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.362279071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.364954075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365043476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365060876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365137676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363853574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363885474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363981074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401018432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401163732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401199732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401348832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:50:18 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.355659920Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.500564779Z" level=info msg="ignoring event" container=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.502392091Z" level=info msg="shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505257410Z" level=warning msg="cleaning up after shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505505012Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.559469469Z" level=info msg="ignoring event" container=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562029186Z" level=info msg="shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562079586Z" level=warning msg="cleaning up after shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562089586Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.565925812Z" level=info msg="ignoring event" container=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566150213Z" level=info msg="shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566239014Z" level=warning msg="cleaning up after shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566294014Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.568666030Z" level=info msg="ignoring event" container=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568889531Z" level=info msg="shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568944532Z" level=warning msg="cleaning up after shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568956532Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.591020678Z" level=info msg="ignoring event" container=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591289280Z" level=info msg="shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591381680Z" level=warning msg="cleaning up after shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591394180Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.601843549Z" level=info msg="shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602416253Z" level=info msg="ignoring event" container=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602469454Z" level=info msg="ignoring event" container=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602501354Z" level=info msg="ignoring event" container=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602446653Z" level=warning msg="cleaning up after shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602625555Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608358493Z" level=info msg="shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608420693Z" level=warning msg="cleaning up after shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608435393Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622700688Z" level=info msg="shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622837388Z" level=warning msg="cleaning up after shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622919789Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651705580Z" level=info msg="shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651827580Z" level=warning msg="cleaning up after shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651840680Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653814394Z" level=info msg="ignoring event" container=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653869794Z" level=info msg="ignoring event" container=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656537812Z" level=info msg="shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656607912Z" level=warning msg="cleaning up after shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656638212Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689247628Z" level=info msg="shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689349429Z" level=warning msg="cleaning up after shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689362229Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.689544230Z" level=info msg="ignoring event" container=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.776260304Z" level=info msg="ignoring event" container=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.781705240Z" level=info msg="shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782034342Z" level=warning msg="cleaning up after shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782163743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.471467983Z" level=info msg="shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472291989Z" level=warning msg="cleaning up after shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472355489Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3911]: time="2024-06-03T12:50:23.473084794Z" level=info msg="ignoring event" container=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.462170568Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.522259595Z" level=info msg="ignoring event" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524322178Z" level=info msg="shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524549387Z" level=warning msg="cleaning up after shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524566388Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.585453246Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586244178Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586390484Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586415685Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:50:29 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Consumed 9.808s CPU time.
	Jun 03 12:50:29 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:50:29 functional-808300 dockerd[7943]: time="2024-06-03T12:50:29.663260817Z" level=info msg="Starting up"
	Jun 03 12:51:29 functional-808300 dockerd[7943]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 12:51:29 functional-808300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0603 12:51:29.786899    1732 out.go:239] * 
	W0603 12:51:29.788963    1732 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:51:29.789078    1732 out.go:177] 
	
	
	==> Docker <==
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID 'c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID 'f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID 'dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID 'eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-03T13:10:36Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +13.935296] systemd-fstab-generator[2356]: Ignoring "noauto" option for root device
	[  +0.285231] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.968672] kauditd_printk_skb: 71 callbacks suppressed
	[Jun 3 12:46] systemd-fstab-generator[3432]: Ignoring "noauto" option for root device
	[  +0.669802] systemd-fstab-generator[3482]: Ignoring "noauto" option for root device
	[  +0.254078] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.299244] systemd-fstab-generator[3508]: Ignoring "noauto" option for root device
	[  +5.308659] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.948638] systemd-fstab-generator[4092]: Ignoring "noauto" option for root device
	[  +0.218396] systemd-fstab-generator[4104]: Ignoring "noauto" option for root device
	[  +0.206903] systemd-fstab-generator[4116]: Ignoring "noauto" option for root device
	[  +0.257355] systemd-fstab-generator[4131]: Ignoring "noauto" option for root device
	[  +0.830261] systemd-fstab-generator[4289]: Ignoring "noauto" option for root device
	[  +0.959896] kauditd_printk_skb: 142 callbacks suppressed
	[  +5.613475] systemd-fstab-generator[5386]: Ignoring "noauto" option for root device
	[  +0.142828] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.855368] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.262421] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.413051] systemd-fstab-generator[5910]: Ignoring "noauto" option for root device
	[Jun 3 12:50] systemd-fstab-generator[7480]: Ignoring "noauto" option for root device
	[  +0.143757] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.490699] systemd-fstab-generator[7516]: Ignoring "noauto" option for root device
	[  +0.290075] systemd-fstab-generator[7529]: Ignoring "noauto" option for root device
	[  +0.285138] systemd-fstab-generator[7542]: Ignoring "noauto" option for root device
	[  +5.306666] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 13:11:35 up 29 min,  0 users,  load average: 0.00, 0.01, 0.03
	Linux functional-808300 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 13:11:32 functional-808300 kubelet[5393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:11:32 functional-808300 kubelet[5393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:11:32 functional-808300 kubelet[5393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:11:32 functional-808300 kubelet[5393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.596614    5393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused" interval="7s"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.807929    5393 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.22.146.164:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-808300.17d57f81dea98cbd  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-808300,UID:11918179ce61499bb08bfc780760a360,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://172.22.146.164:8441/livez\": dial tcp 172.22.146.164:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-808300,},FirstTimestamp:2024-06-03 12:50:28.674874557 +0000 UTC m=+236.049288249,LastTimestamp:2024-06-03 12:50:28.674874557 +0000 UTC m=+236.049288249,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-808300,}"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.808111    5393 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-functional-808300.17d57f81dea98cbd  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-808300,UID:11918179ce61499bb08bfc780760a360,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://172.22.146.164:8441/livez\": dial tcp 172.22.146.164:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-808300,},FirstTimestamp:2024-06-03 12:50:28.674874557 +0000 UTC m=+236.049288249,LastTimestamp:2024-06-03 12:50:28.674874557 +0000 UTC m=+236.049288249,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-808300,}"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.809648    5393 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-808300.17d57f81d4a04596\": dial tcp 172.22.146.164:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-808300.17d57f81d4a04596  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-808300,UID:11918179ce61499bb08bfc780760a360,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.22.146.164:8441/readyz\": dial tcp 172.22.146.164:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-808300,},FirstTimestamp:2024-06-03 12:50:28.506494358 +0000 UTC m=+235.880908150,LastTimestamp:2024-06-03 12:50:28.819543899 +0000 UTC m=+236.193957591,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-808300,}"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.910072    5393 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.910229    5393 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.910151    5393 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.911376    5393 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.911584    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.911644    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.911831    5393 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.912041    5393 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: I0603 13:11:34.912612    5393 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.910072    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.916065    5393 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.911517    5393 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.911592    5393 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.917451    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.917567    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.918239    5393 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jun 03 13:11:35 functional-808300 kubelet[5393]: E0603 13:11:35.050387    5393 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 21m17.215236285s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:07:34.504912    2940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0603 13:08:34.082021    2940 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:08:34.141600    2940 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:08:34.187906    2940 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:09:34.357077    2940 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:09:34.401035    2940 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:09:34.447390    2940 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:10:34.589244    2940 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:10:34.642470    2940 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300: exit status 2 (12.8322811s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:11:36.176414    9456 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-808300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (302.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (187.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-808300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1625: (dbg) Non-zero exit: kubectl --context functional-808300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: exit status 1 (2.1911139s)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://172.22.146.164:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-808300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-808300 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-808300 describe po hello-node-connect: exit status 1 (2.2097056s)

                                                
                                                
** stderr ** 
	Unable to connect to the server: dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:1600: "kubectl --context functional-808300 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-808300 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-808300 logs -l app=hello-node-connect: exit status 1 (2.167746s)

                                                
                                                
** stderr ** 
	Unable to connect to the server: dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:1606: "kubectl --context functional-808300 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-808300 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-808300 describe svc hello-node-connect: exit status 1 (2.1882888s)

                                                
                                                
** stderr ** 
	Unable to connect to the server: dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:1612: "kubectl --context functional-808300 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300: exit status 2 (12.7433424s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:03:48.483880    9876 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs -n 25: (2m32.7876933s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| cache   | delete                                                                                              | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | registry.k8s.io/pause:3.3                                                                           |                   |                   |         |                     |                     |
	| cache   | list                                                                                                | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	| ssh     | functional-808300 ssh sudo                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | crictl images                                                                                       |                   |                   |         |                     |                     |
	| ssh     | functional-808300                                                                                   | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC | 03 Jun 24 12:47 UTC |
	|         | ssh sudo docker rmi                                                                                 |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh                                                                               | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:47 UTC |                     |
	|         | sudo crictl inspecti                                                                                |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache reload                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	| ssh     | functional-808300 ssh                                                                               | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | sudo crictl inspecti                                                                                |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| cache   | delete                                                                                              | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | registry.k8s.io/pause:3.1                                                                           |                   |                   |         |                     |                     |
	| cache   | delete                                                                                              | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| kubectl | functional-808300 kubectl --                                                                        | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:48 UTC | 03 Jun 24 12:48 UTC |
	|         | --context functional-808300                                                                         |                   |                   |         |                     |                     |
	|         | get pods                                                                                            |                   |                   |         |                     |                     |
	| start   | -p functional-808300                                                                                | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:49 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                            |                   |                   |         |                     |                     |
	|         | --wait=all                                                                                          |                   |                   |         |                     |                     |
	| config  | functional-808300 config unset                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| cp      | functional-808300 cp                                                                                | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| config  | functional-808300 config get                                                                        | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| config  | functional-808300 config set                                                                        | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | cpus 2                                                                                              |                   |                   |         |                     |                     |
	| config  | functional-808300 config get                                                                        | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| config  | functional-808300 config unset                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| config  | functional-808300 config get                                                                        | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| addons  | functional-808300 addons list                                                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	| addons  | functional-808300 addons list                                                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| service | functional-808300 service list                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	| ssh     | functional-808300 ssh -n                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | functional-808300 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| service | functional-808300 service list                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| service | functional-808300 service                                                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | --namespace=default --https                                                                         |                   |                   |         |                     |                     |
	|         | --url hello-node                                                                                    |                   |                   |         |                     |                     |
	| cp      | functional-808300 cp functional-808300:/home/docker/cp-test.txt                                     | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd2662913280\001\cp-test.txt |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:49:00
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:49:00.235842    1732 out.go:291] Setting OutFile to fd 840 ...
	I0603 12:49:00.236577    1732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:49:00.236577    1732 out.go:304] Setting ErrFile to fd 616...
	I0603 12:49:00.236577    1732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:49:00.261282    1732 out.go:298] Setting JSON to false
	I0603 12:49:00.264282    1732 start.go:129] hostinfo: {"hostname":"minikube3","uptime":19868,"bootTime":1717399071,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 12:49:00.264282    1732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 12:49:00.270409    1732 out.go:177] * [functional-808300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 12:49:00.274641    1732 notify.go:220] Checking for updates...
	I0603 12:49:00.276699    1732 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:49:00.278693    1732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:49:00.281652    1732 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 12:49:00.284648    1732 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:49:00.286651    1732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:49:00.291036    1732 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:49:00.291858    1732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:49:05.570980    1732 out.go:177] * Using the hyperv driver based on existing profile
	I0603 12:49:05.575724    1732 start.go:297] selected driver: hyperv
	I0603 12:49:05.575724    1732 start.go:901] validating driver "hyperv" against &{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:49:05.575724    1732 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:49:05.626806    1732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:49:05.626806    1732 cni.go:84] Creating CNI manager for ""
	I0603 12:49:05.626806    1732 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:49:05.626806    1732 start.go:340] cluster config:
	{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:49:05.626806    1732 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:49:05.633624    1732 out.go:177] * Starting "functional-808300" primary control-plane node in "functional-808300" cluster
	I0603 12:49:05.636635    1732 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 12:49:05.637158    1732 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 12:49:05.637158    1732 cache.go:56] Caching tarball of preloaded images
	I0603 12:49:05.637684    1732 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 12:49:05.637751    1732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 12:49:05.637751    1732 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\config.json ...
	I0603 12:49:05.640967    1732 start.go:360] acquireMachinesLock for functional-808300: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:49:05.640967    1732 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-808300"
	I0603 12:49:05.640967    1732 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:49:05.640967    1732 fix.go:54] fixHost starting: 
	I0603 12:49:05.641715    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:08.415782    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:08.415782    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:08.415782    1732 fix.go:112] recreateIfNeeded on functional-808300: state=Running err=<nil>
	W0603 12:49:08.416795    1732 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:49:08.420899    1732 out.go:177] * Updating the running hyperv "functional-808300" VM ...
	I0603 12:49:08.423508    1732 machine.go:94] provisionDockerMachine start ...
	I0603 12:49:08.423582    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:13.253487    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:13.254503    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:13.260432    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:13.261482    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:13.261482    1732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:49:13.399057    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:49:13.399210    1732 buildroot.go:166] provisioning hostname "functional-808300"
	I0603 12:49:13.399210    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:15.541436    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:15.541675    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:15.541675    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:18.074512    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:18.074512    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:18.080673    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:18.081341    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:18.081341    1732 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-808300 && echo "functional-808300" | sudo tee /etc/hostname
	I0603 12:49:18.249098    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:49:18.249098    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:20.352120    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:20.352282    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:20.352356    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:22.898474    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:22.898474    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:22.905033    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:22.905583    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:22.905583    1732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-808300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-808300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-808300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:49:23.038156    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:49:23.038156    1732 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 12:49:23.038286    1732 buildroot.go:174] setting up certificates
	I0603 12:49:23.038286    1732 provision.go:84] configureAuth start
	I0603 12:49:23.038368    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:27.735183    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:27.735183    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:27.736187    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:32.410109    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:32.410109    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:32.410109    1732 provision.go:143] copyHostCerts
	I0603 12:49:32.410879    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 12:49:32.410879    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 12:49:32.411331    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 12:49:32.412635    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 12:49:32.412635    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 12:49:32.412996    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 12:49:32.414198    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 12:49:32.414198    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 12:49:32.414545    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 12:49:32.415610    1732 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-808300 san=[127.0.0.1 172.22.146.164 functional-808300 localhost minikube]
	I0603 12:49:32.712767    1732 provision.go:177] copyRemoteCerts
	I0603 12:49:32.724764    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:49:32.724764    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:34.837128    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:34.837128    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:34.837856    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:37.375330    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:37.375330    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:37.375559    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:49:37.480771    1732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7559241s)
	I0603 12:49:37.480826    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:49:37.528205    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:49:37.578459    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:49:37.627279    1732 provision.go:87] duration metric: took 14.5888698s to configureAuth
	I0603 12:49:37.627279    1732 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:49:37.628273    1732 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:49:37.628273    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:39.750715    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:39.750715    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:39.750894    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:42.248163    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:42.248163    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:42.253817    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:42.254350    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:42.254350    1732 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 12:49:42.390315    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 12:49:42.390315    1732 buildroot.go:70] root file system type: tmpfs
	I0603 12:49:42.390486    1732 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 12:49:42.390577    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:47.015306    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:47.015306    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:47.020999    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:47.020999    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:47.021566    1732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 12:49:47.189720    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 12:49:47.189902    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:51.842444    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:51.842685    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:51.847410    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:51.848026    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:51.848136    1732 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 12:49:52.002270    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:49:52.002270    1732 machine.go:97] duration metric: took 43.5783954s to provisionDockerMachine
	I0603 12:49:52.002270    1732 start.go:293] postStartSetup for "functional-808300" (driver="hyperv")
	I0603 12:49:52.002270    1732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:49:52.014902    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:49:52.014902    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:54.129644    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:54.129780    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:54.129780    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:56.657058    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:56.657058    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:56.657058    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:49:56.769087    1732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.754029s)
	I0603 12:49:56.782600    1732 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:49:56.789695    1732 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:49:56.789695    1732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 12:49:56.790223    1732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 12:49:56.790944    1732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 12:49:56.791808    1732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts -> hosts in /etc/test/nested/copy/10544
	I0603 12:49:56.804680    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/10544
	I0603 12:49:56.825546    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 12:49:56.870114    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts --> /etc/test/nested/copy/10544/hosts (40 bytes)
	I0603 12:49:56.918755    1732 start.go:296] duration metric: took 4.9164445s for postStartSetup
	I0603 12:49:56.918830    1732 fix.go:56] duration metric: took 51.2774317s for fixHost
	I0603 12:49:56.918830    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:01.610237    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:01.610237    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:01.616356    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:01.616925    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:50:01.616925    1732 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:50:01.754458    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717419001.765759569
	
	I0603 12:50:01.754458    1732 fix.go:216] guest clock: 1717419001.765759569
	I0603 12:50:01.754999    1732 fix.go:229] Guest: 2024-06-03 12:50:01.765759569 +0000 UTC Remote: 2024-06-03 12:49:56.9188301 +0000 UTC m=+56.849473901 (delta=4.846929469s)
	I0603 12:50:01.755117    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:06.434824    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:06.434824    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:06.441287    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:06.441474    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:50:06.441474    1732 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717419001
	I0603 12:50:06.585742    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:50:01 UTC 2024
	
	I0603 12:50:06.585742    1732 fix.go:236] clock set: Mon Jun  3 12:50:01 UTC 2024
	 (err=<nil>)
	I0603 12:50:06.585742    1732 start.go:83] releasing machines lock for "functional-808300", held for 1m0.9442633s
	I0603 12:50:06.586483    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:11.280358    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:11.280358    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:11.286940    1732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:50:11.287127    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:11.297353    1732 ssh_runner.go:195] Run: cat /version.json
	I0603 12:50:11.297353    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:13.526365    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:13.526365    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:13.526449    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:16.184971    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:16.184971    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:16.185280    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:50:16.202281    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:16.202281    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:16.203074    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:50:16.291651    1732 ssh_runner.go:235] Completed: cat /version.json: (4.9942561s)
	I0603 12:50:16.306274    1732 ssh_runner.go:195] Run: systemctl --version
	I0603 12:50:16.355391    1732 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0675511s)
	I0603 12:50:16.366636    1732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:50:16.375691    1732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:50:16.388090    1732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:50:16.405978    1732 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 12:50:16.405978    1732 start.go:494] detecting cgroup driver to use...
	I0603 12:50:16.405978    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:50:16.453816    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 12:50:16.485596    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 12:50:16.503969    1732 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 12:50:16.517971    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 12:50:16.549156    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:50:16.581312    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 12:50:16.612775    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:50:16.647414    1732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:50:16.678358    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 12:50:16.708418    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 12:50:16.743475    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 12:50:16.776832    1732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:50:16.806324    1732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:50:16.840166    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:50:17.096238    1732 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 12:50:17.129261    1732 start.go:494] detecting cgroup driver to use...
	I0603 12:50:17.142588    1732 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 12:50:17.178015    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:50:17.214526    1732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:50:17.282409    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:50:17.322016    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 12:50:17.346060    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:50:17.394003    1732 ssh_runner.go:195] Run: which cri-dockerd
	I0603 12:50:17.411821    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 12:50:17.430017    1732 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 12:50:17.478608    1732 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 12:50:17.759911    1732 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 12:50:18.009777    1732 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 12:50:18.009777    1732 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 12:50:18.055298    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:50:18.318935    1732 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 12:51:29.680979    1732 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3613501s)
	I0603 12:51:29.693407    1732 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0603 12:51:29.782469    1732 out.go:177] 
	W0603 12:51:29.786096    1732 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 03 12:43:24 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.628866122Z" level=info msg="Starting up"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.630311181Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.634028433Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.661523756Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685876251Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685936153Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686065059Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686231965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686317369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686429774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686588180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686671783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686689684Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686701185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686787688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.687222106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689704107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689791211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689905315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690003819Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690236329Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690393535Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690500340Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716000481Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716245191Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716277293Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716304794Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716324495Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716446300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716794814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716969021Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717114327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717181530Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717203130Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717218631Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717231232Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717245932Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717260533Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717272933Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717285134Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717297434Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717327536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717348336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717362137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717375337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717387738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717400138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717412139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717424939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717439040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717453441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717465841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717477642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717489642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717504543Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717524444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717538544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717550045Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717602747Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717628148Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717640148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717652149Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717663249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717675450Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717686050Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717990963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718194271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718615288Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718715492Z" level=info msg="containerd successfully booted in 0.058473s"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.702473456Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.735688127Z" level=info msg="Loading containers: start."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.010503637Z" level=info msg="Loading containers: done."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031232026Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031421030Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.159563851Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:26 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.161009285Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:43:56 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.687463640Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.689959945Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690215845Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690324445Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690369545Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:43:57 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:43:57 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:43:57 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.780438278Z" level=info msg="Starting up"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.781801780Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.787716190Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1033
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.819821447Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846310594Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846401094Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846519995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846539495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846563695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846575995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846813395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846924995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846964595Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846992395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847016696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847167896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.849934901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850031601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850168801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850259101Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850291801Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850310501Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850321201Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850561202Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850705702Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850744702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850771602Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850787202Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850831302Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851085603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851156303Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851172503Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851184203Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851196303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851208703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851219903Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851231903Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851245403Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851257303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851269103Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851295403Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851313103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851325103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851341303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851354003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851367703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851379503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851390703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851401803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851413403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851426003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851437203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851447803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851458203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851471403Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851491803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851503303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851513904Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851549004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851658104Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851678204Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851698604Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851709004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851720604Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851734804Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852115105Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852376705Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852445905Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852489705Z" level=info msg="containerd successfully booted in 0.033698s"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.828570435Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.851038275Z" level=info msg="Loading containers: start."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.026943787Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.118964350Z" level=info msg="Loading containers: done."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141485490Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141680390Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.197188889Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:59 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.198903592Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.853372506Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.854600708Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855309009Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855465609Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855498609Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:44:08 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:44:09 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:44:09 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.931457417Z" level=info msg="Starting up"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.932516719Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.934127421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1334
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.966766979Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992224024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992259224Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992358425Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992394325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992420125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992436425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992562225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992696325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992729425Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992741025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992765125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992867525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996464532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996565532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996738732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996823633Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996855433Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996872533Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996882433Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997062833Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997113833Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997130833Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997144433Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997157233Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997203633Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997453534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997578234Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997614934Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997663134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997678134Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997689934Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997700634Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997715034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997729234Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997740634Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997752034Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997762234Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997779734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997792334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997804134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997815434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997826234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997837534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997847934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997884934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997921334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997937534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997948435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997958635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997969935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997987135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998006735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998018335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998028535Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998087335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998102835Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998113035Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998125435Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998134935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998146935Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998156235Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998467335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998587736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998680736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998717236Z" level=info msg="containerd successfully booted in 0.033704s"
	Jun 03 12:44:10 functional-808300 dockerd[1328]: time="2024-06-03T12:44:10.979375074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:44:13 functional-808300 dockerd[1328]: time="2024-06-03T12:44:13.979794393Z" level=info msg="Loading containers: start."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.166761224Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.246745866Z" level=info msg="Loading containers: done."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275542917Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275794717Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318299593Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:44:14 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318416693Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481193033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481300231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.482452008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.483163794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555242697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555441293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555463693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.556420474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641567724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641688622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641972616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.642377908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696408761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696920551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697026749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697598738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.923771454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.925833014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926097609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926698097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975113159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975335655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975440053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.976007342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079922031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079992130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080044229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080177726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127553471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127864765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.128102061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.134911038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534039591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534739189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534993488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.535448286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.999922775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001555370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001675769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001896169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.574212998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575391194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575730993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.576013792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119735326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119816834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119850737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.120575802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591893357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591995665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592015367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592819829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.866872994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867043707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867059308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867176618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:11 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.320707911Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.530075506Z" level=info msg="ignoring event" container=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530863111Z" level=info msg="shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530934512Z" level=warning msg="cleaning up after shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530947812Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548201118Z" level=info msg="shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548262819Z" level=warning msg="cleaning up after shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548275819Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.548926923Z" level=info msg="ignoring event" container=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.555005761Z" level=info msg="ignoring event" container=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555226762Z" level=info msg="shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555637564Z" level=warning msg="cleaning up after shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555871866Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571443362Z" level=info msg="shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571642763Z" level=info msg="ignoring event" container=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571688564Z" level=info msg="ignoring event" container=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571715264Z" level=info msg="ignoring event" container=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571729764Z" level=info msg="ignoring event" container=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583600637Z" level=warning msg="cleaning up after shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583651738Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571922365Z" level=info msg="shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602203453Z" level=warning msg="cleaning up after shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602215153Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.605428672Z" level=info msg="shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605570873Z" level=info msg="ignoring event" container=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605648174Z" level=info msg="ignoring event" container=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605689174Z" level=info msg="ignoring event" container=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605708174Z" level=info msg="ignoring event" container=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616825743Z" level=info msg="shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619069757Z" level=warning msg="cleaning up after shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619081657Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571968865Z" level=info msg="shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.622950981Z" level=warning msg="cleaning up after shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.623019281Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616768943Z" level=info msg="shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649220943Z" level=warning msg="cleaning up after shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649232743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649593346Z" level=warning msg="cleaning up after shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649632646Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616798243Z" level=info msg="shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660353412Z" level=warning msg="cleaning up after shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660613314Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571948565Z" level=info msg="shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661857022Z" level=warning msg="cleaning up after shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661869022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.701730868Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.789945914Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.800700381Z" level=info msg="ignoring event" container=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802193190Z" level=info msg="shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802687893Z" level=warning msg="cleaning up after shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802957394Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.865834983Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1328]: time="2024-06-03T12:46:16.426781600Z" level=info msg="ignoring event" container=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429021313Z" level=info msg="shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429197714Z" level=warning msg="cleaning up after shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429215515Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.461057012Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.432071476Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.471179469Z" level=info msg="ignoring event" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471301366Z" level=info msg="shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471394963Z" level=warning msg="cleaning up after shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471408762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.533991230Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534869803Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534996499Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.535310690Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:46:22 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Consumed 4.876s CPU time.
	Jun 03 12:46:22 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.610929688Z" level=info msg="Starting up"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.611865461Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.613136725Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=3917
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.646536071Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670247194Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670360391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670450088Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670483087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670506787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670539786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670840677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670938074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670960374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670972073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670998073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.671139469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674461374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674583370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675060557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675230152Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675269851Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675297750Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675312250Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675642440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675701438Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675746437Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675788936Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675843034Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675898433Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677513487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677902676Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677984973Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678005973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678019272Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678033372Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678045471Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678074771Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678087670Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678099470Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678111970Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678122369Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678141069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678165268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678179068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678190967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678201767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678212967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678223666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678234666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678245966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678259765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678270865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678281565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678298864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678314564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678506758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678611555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678628755Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678700553Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679040743Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679084142Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679118541Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679144240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679155740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679165739Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679517929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679766922Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679827521Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679865720Z" level=info msg="containerd successfully booted in 0.035745s"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.663212880Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.695980015Z" level=info msg="Loading containers: start."
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.961510211Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.046062971Z" level=info msg="Loading containers: done."
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.075922544Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.076129939Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124525761Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124901652Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:46:24 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.231994444Z" level=error msg="Handler for GET /v1.44/containers/68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" spanID=326af23131ec94a7 traceID=8803c53e169299942225f4075fc21de5
	Jun 03 12:46:24 functional-808300 dockerd[3911]: 2024/06/03 12:46:24 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772084063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772274159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772357358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.775252298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945246488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945323086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945406685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.950967170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029005105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029349598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029863988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.030264081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039564104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039688602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039761901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039928798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226303462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226586457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226751953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.227086747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347252567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347436764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347474363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347654660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.441905572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442046969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442209966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442589559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.635985990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636416182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636608978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.637648558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.848060467Z" level=info msg="ignoring event" container=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851167708Z" level=info msg="shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851742597Z" level=warning msg="cleaning up after shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851821695Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.861031421Z" level=info msg="ignoring event" container=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.864043064Z" level=info msg="shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.865018845Z" level=info msg="ignoring event" container=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866029226Z" level=warning msg="cleaning up after shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866146324Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.865866429Z" level=info msg="shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866559616Z" level=warning msg="cleaning up after shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866626315Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.874086573Z" level=info msg="ignoring event" container=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.875139053Z" level=info msg="ignoring event" container=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879726666Z" level=info msg="shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.883291398Z" level=warning msg="cleaning up after shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879810365Z" level=info msg="shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886134245Z" level=warning msg="cleaning up after shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886413939Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.884961767Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.005534788Z" level=info msg="ignoring event" container=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007078361Z" level=info msg="shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007356756Z" level=warning msg="cleaning up after shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007522453Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.117025348Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.487894595Z" level=info msg="ignoring event" container=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.489713764Z" level=info msg="shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490041558Z" level=warning msg="cleaning up after shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490061758Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.915977147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916565637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916679435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916848732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.031752879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032666665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032798863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.033668649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3911]: time="2024-06-03T12:46:29.861712863Z" level=info msg="ignoring event" container=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863639332Z" level=info msg="shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863797430Z" level=warning msg="cleaning up after shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863862329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194045838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194125737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194139737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194288235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.324621840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326281415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326470813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326978105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424497687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424951381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447077459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447586651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531075037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531171736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531184436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531290034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542348873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542475071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542490771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542581970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554547048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554615849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554645449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554819849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595679596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595829096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595871096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.596066296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615722419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615775719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615802019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615963419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619500423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619605123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619619223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619740523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.362279071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.364954075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365043476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365060876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365137676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363853574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363885474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363981074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401018432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401163732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401199732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401348832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:50:18 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.355659920Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.500564779Z" level=info msg="ignoring event" container=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.502392091Z" level=info msg="shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505257410Z" level=warning msg="cleaning up after shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505505012Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.559469469Z" level=info msg="ignoring event" container=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562029186Z" level=info msg="shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562079586Z" level=warning msg="cleaning up after shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562089586Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.565925812Z" level=info msg="ignoring event" container=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566150213Z" level=info msg="shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566239014Z" level=warning msg="cleaning up after shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566294014Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.568666030Z" level=info msg="ignoring event" container=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568889531Z" level=info msg="shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568944532Z" level=warning msg="cleaning up after shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568956532Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.591020678Z" level=info msg="ignoring event" container=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591289280Z" level=info msg="shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591381680Z" level=warning msg="cleaning up after shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591394180Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.601843549Z" level=info msg="shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602416253Z" level=info msg="ignoring event" container=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602469454Z" level=info msg="ignoring event" container=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602501354Z" level=info msg="ignoring event" container=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602446653Z" level=warning msg="cleaning up after shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602625555Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608358493Z" level=info msg="shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608420693Z" level=warning msg="cleaning up after shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608435393Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622700688Z" level=info msg="shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622837388Z" level=warning msg="cleaning up after shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622919789Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651705580Z" level=info msg="shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651827580Z" level=warning msg="cleaning up after shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651840680Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653814394Z" level=info msg="ignoring event" container=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653869794Z" level=info msg="ignoring event" container=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656537812Z" level=info msg="shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656607912Z" level=warning msg="cleaning up after shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656638212Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689247628Z" level=info msg="shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689349429Z" level=warning msg="cleaning up after shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689362229Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.689544230Z" level=info msg="ignoring event" container=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.776260304Z" level=info msg="ignoring event" container=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.781705240Z" level=info msg="shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782034342Z" level=warning msg="cleaning up after shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782163743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.471467983Z" level=info msg="shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472291989Z" level=warning msg="cleaning up after shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472355489Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3911]: time="2024-06-03T12:50:23.473084794Z" level=info msg="ignoring event" container=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.462170568Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.522259595Z" level=info msg="ignoring event" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524322178Z" level=info msg="shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524549387Z" level=warning msg="cleaning up after shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524566388Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.585453246Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586244178Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586390484Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586415685Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:50:29 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Consumed 9.808s CPU time.
	Jun 03 12:50:29 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:50:29 functional-808300 dockerd[7943]: time="2024-06-03T12:50:29.663260817Z" level=info msg="Starting up"
	Jun 03 12:51:29 functional-808300 dockerd[7943]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 12:51:29 functional-808300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0603 12:51:29.786899    1732 out.go:239] * 
	W0603 12:51:29.788963    1732 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:51:29.789078    1732 out.go:177] 
	
	
	==> Docker <==
	Jun 03 13:05:33 functional-808300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="error getting RW layer size for container ID '1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc'"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="error getting RW layer size for container ID '83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf'"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="error getting RW layer size for container ID '75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca'"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="error getting RW layer size for container ID 'eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:05:33 functional-808300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d'"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="error getting RW layer size for container ID '83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210'"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="error getting RW layer size for container ID 'c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b'"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="error getting RW layer size for container ID '2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428'"
	Jun 03 13:05:33 functional-808300 systemd[1]: Failed to start Docker Application Container Engine.
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="error getting RW layer size for container ID 'be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495'"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="error getting RW layer size for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f'"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="error getting RW layer size for container ID '65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID '65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c'"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="error getting RW layer size for container ID 'f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:05:33 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:05:33Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-03T13:05:35Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +13.935296] systemd-fstab-generator[2356]: Ignoring "noauto" option for root device
	[  +0.285231] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.968672] kauditd_printk_skb: 71 callbacks suppressed
	[Jun 3 12:46] systemd-fstab-generator[3432]: Ignoring "noauto" option for root device
	[  +0.669802] systemd-fstab-generator[3482]: Ignoring "noauto" option for root device
	[  +0.254078] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.299244] systemd-fstab-generator[3508]: Ignoring "noauto" option for root device
	[  +5.308659] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.948638] systemd-fstab-generator[4092]: Ignoring "noauto" option for root device
	[  +0.218396] systemd-fstab-generator[4104]: Ignoring "noauto" option for root device
	[  +0.206903] systemd-fstab-generator[4116]: Ignoring "noauto" option for root device
	[  +0.257355] systemd-fstab-generator[4131]: Ignoring "noauto" option for root device
	[  +0.830261] systemd-fstab-generator[4289]: Ignoring "noauto" option for root device
	[  +0.959896] kauditd_printk_skb: 142 callbacks suppressed
	[  +5.613475] systemd-fstab-generator[5386]: Ignoring "noauto" option for root device
	[  +0.142828] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.855368] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.262421] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.413051] systemd-fstab-generator[5910]: Ignoring "noauto" option for root device
	[Jun 3 12:50] systemd-fstab-generator[7480]: Ignoring "noauto" option for root device
	[  +0.143757] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.490699] systemd-fstab-generator[7516]: Ignoring "noauto" option for root device
	[  +0.290075] systemd-fstab-generator[7529]: Ignoring "noauto" option for root device
	[  +0.285138] systemd-fstab-generator[7542]: Ignoring "noauto" option for root device
	[  +5.306666] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 13:06:33 up 24 min,  0 users,  load average: 0.00, 0.00, 0.05
	Linux functional-808300 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 13:06:31 functional-808300 kubelet[5393]: E0603 13:06:31.358434    5393 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-808300\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 13:06:31 functional-808300 kubelet[5393]: E0603 13:06:31.358535    5393 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jun 03 13:06:32 functional-808300 kubelet[5393]: E0603 13:06:32.241503    5393 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.22.146.164:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-808300.17d57f81d4a04596  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-808300,UID:11918179ce61499bb08bfc780760a360,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.22.146.164:8441/readyz\": dial tcp 172.22.146.164:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-808300,},FirstTimestamp:2024-06-03 12:50:28.506494358 +0000 UTC m=+235.880908150,LastTimestamp:2024-06-03 12:50:28.506494358 +0000 UTC m=+235.880908150,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-808300,}"
	Jun 03 13:06:32 functional-808300 kubelet[5393]: I0603 13:06:32.896928    5393 status_manager.go:853] "Failed to get status for pod" podUID="11918179ce61499bb08bfc780760a360" pod="kube-system/kube-apiserver-functional-808300" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-808300\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 13:06:32 functional-808300 kubelet[5393]: E0603 13:06:32.925508    5393 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:06:32 functional-808300 kubelet[5393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:06:32 functional-808300 kubelet[5393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:06:32 functional-808300 kubelet[5393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:06:32 functional-808300 kubelet[5393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.482941    5393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused" interval="7s"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.597113    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.597296    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.599220    5393 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.599641    5393 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: I0603 13:06:33.599686    5393 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.599983    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.600129    5393 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.600632    5393 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.600819    5393 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.600896    5393 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.602196    5393 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.603603    5393 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.603971    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.604049    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 03 13:06:33 functional-808300 kubelet[5393]: E0603 13:06:33.604338    5393 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:04:01.209593    7260 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0603 13:04:33.073335    7260 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:04:33.109675    7260 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:04:33.141418    7260 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:04:33.173524    7260 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:04:33.204915    7260 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:05:33.329540    7260 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:05:33.361975    7260 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:05:33.401859    7260 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
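The captured logs above show dockerd failing to restart ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded") and every later crictl/docker call refused. A minimal sketch of commands that could confirm the daemon state on the node (assuming the functional-808300 VM is still reachable; these were not run as part of this report):
	out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo systemctl status docker containerd"
	out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo journalctl -u containerd --no-pager -n 50"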
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300: exit status 2 (12.5444539s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:06:34.349840    2248 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-808300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (187.24s)
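As the advice box in the minikube output above suggests, the full log bundle for an upstream issue could be collected with something like the following (profile flag assumed; not executed during this run):
	out/minikube-windows-amd64.exe logs --file=logs.txt -p functional-808300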

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (491.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
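(For reference, the pod query this helper keeps polling is equivalent to the kubectl command below; the context name is assumed from the profile and the command was not part of the run.)
	kubectl --context functional-808300 get pods -n kube-system -l integration-test=storage-provisioner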
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
E0603 13:05:14.732512   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.22.146.164:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": context deadline exceeded
functional_test_pvc_test.go:44: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300
functional_test_pvc_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300: exit status 2 (11.9462652s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:07:37.437889    7392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test_pvc_test.go:44: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:44: "functional-808300" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
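The repeated connection-refused warnings above come from the test helper polling the apiserver at https://172.22.146.164:8441 for kube-system pods carrying the integration-test=storage-provisioner label until its 4m0s deadline expires. A minimal client-go sketch of that kind of poll is shown below; it assumes a reachable cluster and an illustrative local kubeconfig path, and is not the helper's actual implementation in helpers_test.go.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: "kubeconfig" is an illustrative path, not the CI runner's actual file.
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll for up to 4 minutes, matching the timeout reported in the failure above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
			LabelSelector: "integration-test=storage-provisioner",
		})
		if err != nil {
			// While the apiserver is down this is where "connection refused" surfaces.
			fmt.Println("WARNING: pod list failed:", err)
		} else {
			allRunning := len(pods.Items) > 0
			for _, p := range pods.Items {
				fmt.Println(p.Name, p.Status.Phase)
				if p.Status.Phase != "Running" {
					allRunning = false
				}
			}
			if allRunning {
				return // matching pods are up
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("context deadline exceeded")
			return
		case <-time.After(3 * time.Second):
		}
	}
}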
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300: exit status 2 (11.7620831s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:07:49.364542    6820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs -n 25
E0603 13:08:17.934270   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs -n 25: (3m34.31061s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| config  | functional-808300 config unset                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| config  | functional-808300 config get                                                                        | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| addons  | functional-808300 addons list                                                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	| addons  | functional-808300 addons list                                                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| service | functional-808300 service list                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	| ssh     | functional-808300 ssh -n                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | functional-808300 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| service | functional-808300 service list                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| service | functional-808300 service                                                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | --namespace=default --https                                                                         |                   |                   |         |                     |                     |
	|         | --url hello-node                                                                                    |                   |                   |         |                     |                     |
	| cp      | functional-808300 cp functional-808300:/home/docker/cp-test.txt                                     | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:04 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd2662913280\001\cp-test.txt |                   |                   |         |                     |                     |
	| service | functional-808300                                                                                   | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC |                     |
	|         | service hello-node --url                                                                            |                   |                   |         |                     |                     |
	|         | --format={{.IP}}                                                                                    |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh -n                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | functional-808300 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| service | functional-808300 service                                                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC |                     |
	|         | hello-node --url                                                                                    |                   |                   |         |                     |                     |
	| cp      | functional-808300 cp                                                                                | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh -n                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | functional-808300 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| license |                                                                                                     | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	| ssh     | functional-808300 ssh echo                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | hello                                                                                               |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh cat                                                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | /etc/hostname                                                                                       |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh sudo                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC |                     |
	|         | systemctl is-active crio                                                                            |                   |                   |         |                     |                     |
	| tunnel  | functional-808300 tunnel                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel  | functional-808300 tunnel                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel  | functional-808300 tunnel                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image   | functional-808300 image load --daemon                                                               | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC | 03 Jun 24 13:05 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-808300                                            |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image   | functional-808300 image ls                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC | 03 Jun 24 13:06 UTC |
	| image   | functional-808300 image load --daemon                                                               | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:06 UTC | 03 Jun 24 13:07 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-808300                                            |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image   | functional-808300 image ls                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:07 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:49:00
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:49:00.235842    1732 out.go:291] Setting OutFile to fd 840 ...
	I0603 12:49:00.236577    1732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:49:00.236577    1732 out.go:304] Setting ErrFile to fd 616...
	I0603 12:49:00.236577    1732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:49:00.261282    1732 out.go:298] Setting JSON to false
	I0603 12:49:00.264282    1732 start.go:129] hostinfo: {"hostname":"minikube3","uptime":19868,"bootTime":1717399071,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 12:49:00.264282    1732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 12:49:00.270409    1732 out.go:177] * [functional-808300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 12:49:00.274641    1732 notify.go:220] Checking for updates...
	I0603 12:49:00.276699    1732 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:49:00.278693    1732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:49:00.281652    1732 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 12:49:00.284648    1732 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:49:00.286651    1732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:49:00.291036    1732 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:49:00.291858    1732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:49:05.570980    1732 out.go:177] * Using the hyperv driver based on existing profile
	I0603 12:49:05.575724    1732 start.go:297] selected driver: hyperv
	I0603 12:49:05.575724    1732 start.go:901] validating driver "hyperv" against &{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:49:05.575724    1732 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:49:05.626806    1732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:49:05.626806    1732 cni.go:84] Creating CNI manager for ""
	I0603 12:49:05.626806    1732 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:49:05.626806    1732 start.go:340] cluster config:
	{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:49:05.626806    1732 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:49:05.633624    1732 out.go:177] * Starting "functional-808300" primary control-plane node in "functional-808300" cluster
	I0603 12:49:05.636635    1732 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 12:49:05.637158    1732 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 12:49:05.637158    1732 cache.go:56] Caching tarball of preloaded images
	I0603 12:49:05.637684    1732 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 12:49:05.637751    1732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 12:49:05.637751    1732 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\config.json ...
	I0603 12:49:05.640967    1732 start.go:360] acquireMachinesLock for functional-808300: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:49:05.640967    1732 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-808300"
	I0603 12:49:05.640967    1732 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:49:05.640967    1732 fix.go:54] fixHost starting: 
	I0603 12:49:05.641715    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:08.415782    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:08.415782    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:08.415782    1732 fix.go:112] recreateIfNeeded on functional-808300: state=Running err=<nil>
	W0603 12:49:08.416795    1732 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:49:08.420899    1732 out.go:177] * Updating the running hyperv "functional-808300" VM ...
	I0603 12:49:08.423508    1732 machine.go:94] provisionDockerMachine start ...
	I0603 12:49:08.423582    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:13.253487    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:13.254503    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:13.260432    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:13.261482    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:13.261482    1732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:49:13.399057    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:49:13.399210    1732 buildroot.go:166] provisioning hostname "functional-808300"
	I0603 12:49:13.399210    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:15.541436    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:15.541675    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:15.541675    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:18.074512    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:18.074512    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:18.080673    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:18.081341    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:18.081341    1732 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-808300 && echo "functional-808300" | sudo tee /etc/hostname
	I0603 12:49:18.249098    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:49:18.249098    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:20.352120    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:20.352282    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:20.352356    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:22.898474    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:22.898474    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:22.905033    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:22.905583    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:22.905583    1732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-808300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-808300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-808300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:49:23.038156    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:49:23.038156    1732 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 12:49:23.038286    1732 buildroot.go:174] setting up certificates
	I0603 12:49:23.038286    1732 provision.go:84] configureAuth start
	I0603 12:49:23.038368    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:27.735183    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:27.735183    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:27.736187    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:32.410109    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:32.410109    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:32.410109    1732 provision.go:143] copyHostCerts
	I0603 12:49:32.410879    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 12:49:32.410879    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 12:49:32.411331    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 12:49:32.412635    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 12:49:32.412635    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 12:49:32.412996    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 12:49:32.414198    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 12:49:32.414198    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 12:49:32.414545    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 12:49:32.415610    1732 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-808300 san=[127.0.0.1 172.22.146.164 functional-808300 localhost minikube]
	I0603 12:49:32.712767    1732 provision.go:177] copyRemoteCerts
	I0603 12:49:32.724764    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:49:32.724764    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:34.837128    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:34.837128    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:34.837856    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:37.375330    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:37.375330    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:37.375559    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:49:37.480771    1732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7559241s)
	I0603 12:49:37.480826    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:49:37.528205    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:49:37.578459    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:49:37.627279    1732 provision.go:87] duration metric: took 14.5888698s to configureAuth
	I0603 12:49:37.627279    1732 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:49:37.628273    1732 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:49:37.628273    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:39.750715    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:39.750715    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:39.750894    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:42.248163    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:42.248163    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:42.253817    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:42.254350    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:42.254350    1732 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 12:49:42.390315    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 12:49:42.390315    1732 buildroot.go:70] root file system type: tmpfs
	I0603 12:49:42.390486    1732 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 12:49:42.390577    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:47.015306    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:47.015306    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:47.020999    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:47.020999    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:47.021566    1732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 12:49:47.189720    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 12:49:47.189902    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:51.842444    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:51.842685    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:51.847410    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:51.848026    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:51.848136    1732 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 12:49:52.002270    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:49:52.002270    1732 machine.go:97] duration metric: took 43.5783954s to provisionDockerMachine
	I0603 12:49:52.002270    1732 start.go:293] postStartSetup for "functional-808300" (driver="hyperv")
	I0603 12:49:52.002270    1732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:49:52.014902    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:49:52.014902    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:54.129644    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:54.129780    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:54.129780    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:56.657058    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:56.657058    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:56.657058    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:49:56.769087    1732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.754029s)
	I0603 12:49:56.782600    1732 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:49:56.789695    1732 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:49:56.789695    1732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 12:49:56.790223    1732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 12:49:56.790944    1732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 12:49:56.791808    1732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts -> hosts in /etc/test/nested/copy/10544
	I0603 12:49:56.804680    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/10544
	I0603 12:49:56.825546    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 12:49:56.870114    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts --> /etc/test/nested/copy/10544/hosts (40 bytes)
	I0603 12:49:56.918755    1732 start.go:296] duration metric: took 4.9164445s for postStartSetup
	I0603 12:49:56.918830    1732 fix.go:56] duration metric: took 51.2774317s for fixHost
	I0603 12:49:56.918830    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:01.610237    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:01.610237    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:01.616356    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:01.616925    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:50:01.616925    1732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:50:01.754458    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717419001.765759569
	
	I0603 12:50:01.754458    1732 fix.go:216] guest clock: 1717419001.765759569
	I0603 12:50:01.754999    1732 fix.go:229] Guest: 2024-06-03 12:50:01.765759569 +0000 UTC Remote: 2024-06-03 12:49:56.9188301 +0000 UTC m=+56.849473901 (delta=4.846929469s)
	I0603 12:50:01.755117    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:06.434824    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:06.434824    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:06.441287    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:06.441474    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:50:06.441474    1732 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717419001
	I0603 12:50:06.585742    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:50:01 UTC 2024
	
	I0603 12:50:06.585742    1732 fix.go:236] clock set: Mon Jun  3 12:50:01 UTC 2024
	 (err=<nil>)
	I0603 12:50:06.585742    1732 start.go:83] releasing machines lock for "functional-808300", held for 1m0.9442633s
	I0603 12:50:06.586483    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:11.280358    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:11.280358    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:11.286940    1732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:50:11.287127    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:11.297353    1732 ssh_runner.go:195] Run: cat /version.json
	I0603 12:50:11.297353    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:13.526365    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:13.526365    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:13.526449    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:16.184971    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:16.184971    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:16.185280    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:50:16.202281    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:16.202281    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:16.203074    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:50:16.291651    1732 ssh_runner.go:235] Completed: cat /version.json: (4.9942561s)
	I0603 12:50:16.306274    1732 ssh_runner.go:195] Run: systemctl --version
	I0603 12:50:16.355391    1732 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0675511s)
	I0603 12:50:16.366636    1732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:50:16.375691    1732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:50:16.388090    1732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:50:16.405978    1732 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 12:50:16.405978    1732 start.go:494] detecting cgroup driver to use...
	I0603 12:50:16.405978    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:50:16.453816    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 12:50:16.485596    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 12:50:16.503969    1732 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 12:50:16.517971    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 12:50:16.549156    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:50:16.581312    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 12:50:16.612775    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:50:16.647414    1732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:50:16.678358    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 12:50:16.708418    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 12:50:16.743475    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 12:50:16.776832    1732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:50:16.806324    1732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:50:16.840166    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:50:17.096238    1732 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 12:50:17.129261    1732 start.go:494] detecting cgroup driver to use...
	I0603 12:50:17.142588    1732 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 12:50:17.178015    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:50:17.214526    1732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:50:17.282409    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:50:17.322016    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 12:50:17.346060    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:50:17.394003    1732 ssh_runner.go:195] Run: which cri-dockerd
	I0603 12:50:17.411821    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 12:50:17.430017    1732 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 12:50:17.478608    1732 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 12:50:17.759911    1732 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 12:50:18.009777    1732 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 12:50:18.009777    1732 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 12:50:18.055298    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:50:18.318935    1732 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 12:51:29.680979    1732 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3613501s)
	I0603 12:51:29.693407    1732 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0603 12:51:29.782469    1732 out.go:177] 
	W0603 12:51:29.786096    1732 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 03 12:43:24 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.628866122Z" level=info msg="Starting up"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.630311181Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.634028433Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.661523756Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685876251Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685936153Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686065059Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686231965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686317369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686429774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686588180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686671783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686689684Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686701185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686787688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.687222106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689704107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689791211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689905315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690003819Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690236329Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690393535Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690500340Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716000481Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716245191Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716277293Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716304794Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716324495Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716446300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716794814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716969021Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717114327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717181530Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717203130Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717218631Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717231232Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717245932Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717260533Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717272933Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717285134Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717297434Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717327536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717348336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717362137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717375337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717387738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717400138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717412139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717424939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717439040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717453441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717465841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717477642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717489642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717504543Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717524444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717538544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717550045Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717602747Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717628148Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717640148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717652149Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717663249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717675450Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717686050Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717990963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718194271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718615288Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718715492Z" level=info msg="containerd successfully booted in 0.058473s"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.702473456Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.735688127Z" level=info msg="Loading containers: start."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.010503637Z" level=info msg="Loading containers: done."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031232026Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031421030Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.159563851Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:26 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.161009285Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:43:56 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.687463640Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.689959945Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690215845Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690324445Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690369545Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:43:57 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:43:57 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:43:57 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.780438278Z" level=info msg="Starting up"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.781801780Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.787716190Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1033
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.819821447Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846310594Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846401094Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846519995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846539495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846563695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846575995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846813395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846924995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846964595Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846992395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847016696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847167896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.849934901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850031601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850168801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850259101Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850291801Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850310501Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850321201Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850561202Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850705702Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850744702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850771602Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850787202Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850831302Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851085603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851156303Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851172503Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851184203Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851196303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851208703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851219903Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851231903Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851245403Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851257303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851269103Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851295403Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851313103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851325103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851341303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851354003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851367703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851379503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851390703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851401803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851413403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851426003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851437203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851447803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851458203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851471403Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851491803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851503303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851513904Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851549004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851658104Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851678204Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851698604Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851709004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851720604Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851734804Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852115105Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852376705Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852445905Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852489705Z" level=info msg="containerd successfully booted in 0.033698s"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.828570435Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.851038275Z" level=info msg="Loading containers: start."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.026943787Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.118964350Z" level=info msg="Loading containers: done."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141485490Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141680390Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.197188889Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:59 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.198903592Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.853372506Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.854600708Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855309009Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855465609Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855498609Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:44:08 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:44:09 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:44:09 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.931457417Z" level=info msg="Starting up"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.932516719Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.934127421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1334
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.966766979Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992224024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992259224Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992358425Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992394325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992420125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992436425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992562225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992696325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992729425Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992741025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992765125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992867525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996464532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996565532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996738732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996823633Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996855433Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996872533Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996882433Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997062833Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997113833Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997130833Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997144433Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997157233Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997203633Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997453534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997578234Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997614934Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997663134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997678134Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997689934Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997700634Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997715034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997729234Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997740634Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997752034Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997762234Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997779734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997792334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997804134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997815434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997826234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997837534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997847934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997884934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997921334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997937534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997948435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997958635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997969935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997987135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998006735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998018335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998028535Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998087335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998102835Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998113035Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998125435Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998134935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998146935Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998156235Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998467335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998587736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998680736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998717236Z" level=info msg="containerd successfully booted in 0.033704s"
	Jun 03 12:44:10 functional-808300 dockerd[1328]: time="2024-06-03T12:44:10.979375074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:44:13 functional-808300 dockerd[1328]: time="2024-06-03T12:44:13.979794393Z" level=info msg="Loading containers: start."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.166761224Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.246745866Z" level=info msg="Loading containers: done."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275542917Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275794717Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318299593Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:44:14 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318416693Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481193033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481300231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.482452008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.483163794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555242697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555441293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555463693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.556420474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641567724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641688622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641972616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.642377908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696408761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696920551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697026749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697598738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.923771454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.925833014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926097609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926698097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975113159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975335655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975440053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.976007342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079922031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079992130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080044229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080177726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127553471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127864765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.128102061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.134911038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534039591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534739189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534993488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.535448286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.999922775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001555370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001675769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001896169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.574212998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575391194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575730993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.576013792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119735326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119816834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119850737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.120575802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591893357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591995665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592015367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592819829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.866872994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867043707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867059308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867176618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:11 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.320707911Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.530075506Z" level=info msg="ignoring event" container=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530863111Z" level=info msg="shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530934512Z" level=warning msg="cleaning up after shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530947812Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548201118Z" level=info msg="shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548262819Z" level=warning msg="cleaning up after shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548275819Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.548926923Z" level=info msg="ignoring event" container=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.555005761Z" level=info msg="ignoring event" container=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555226762Z" level=info msg="shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555637564Z" level=warning msg="cleaning up after shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555871866Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571443362Z" level=info msg="shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571642763Z" level=info msg="ignoring event" container=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571688564Z" level=info msg="ignoring event" container=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571715264Z" level=info msg="ignoring event" container=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571729764Z" level=info msg="ignoring event" container=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583600637Z" level=warning msg="cleaning up after shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583651738Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571922365Z" level=info msg="shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602203453Z" level=warning msg="cleaning up after shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602215153Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.605428672Z" level=info msg="shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605570873Z" level=info msg="ignoring event" container=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605648174Z" level=info msg="ignoring event" container=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605689174Z" level=info msg="ignoring event" container=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605708174Z" level=info msg="ignoring event" container=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616825743Z" level=info msg="shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619069757Z" level=warning msg="cleaning up after shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619081657Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571968865Z" level=info msg="shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.622950981Z" level=warning msg="cleaning up after shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.623019281Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616768943Z" level=info msg="shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649220943Z" level=warning msg="cleaning up after shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649232743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649593346Z" level=warning msg="cleaning up after shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649632646Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616798243Z" level=info msg="shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660353412Z" level=warning msg="cleaning up after shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660613314Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571948565Z" level=info msg="shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661857022Z" level=warning msg="cleaning up after shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661869022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.701730868Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.789945914Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.800700381Z" level=info msg="ignoring event" container=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802193190Z" level=info msg="shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802687893Z" level=warning msg="cleaning up after shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802957394Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.865834983Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1328]: time="2024-06-03T12:46:16.426781600Z" level=info msg="ignoring event" container=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429021313Z" level=info msg="shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429197714Z" level=warning msg="cleaning up after shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429215515Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.461057012Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.432071476Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.471179469Z" level=info msg="ignoring event" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471301366Z" level=info msg="shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471394963Z" level=warning msg="cleaning up after shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471408762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.533991230Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534869803Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534996499Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.535310690Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:46:22 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Consumed 4.876s CPU time.
	Jun 03 12:46:22 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.610929688Z" level=info msg="Starting up"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.611865461Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.613136725Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=3917
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.646536071Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670247194Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670360391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670450088Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670483087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670506787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670539786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670840677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670938074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670960374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670972073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670998073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.671139469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674461374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674583370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675060557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675230152Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675269851Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675297750Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675312250Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675642440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675701438Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675746437Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675788936Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675843034Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675898433Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677513487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677902676Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677984973Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678005973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678019272Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678033372Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678045471Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678074771Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678087670Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678099470Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678111970Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678122369Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678141069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678165268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678179068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678190967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678201767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678212967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678223666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678234666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678245966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678259765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678270865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678281565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678298864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678314564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678506758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678611555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678628755Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678700553Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679040743Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679084142Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679118541Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679144240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679155740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679165739Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679517929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679766922Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679827521Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679865720Z" level=info msg="containerd successfully booted in 0.035745s"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.663212880Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.695980015Z" level=info msg="Loading containers: start."
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.961510211Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.046062971Z" level=info msg="Loading containers: done."
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.075922544Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.076129939Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124525761Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124901652Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:46:24 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.231994444Z" level=error msg="Handler for GET /v1.44/containers/68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" spanID=326af23131ec94a7 traceID=8803c53e169299942225f4075fc21de5
	Jun 03 12:46:24 functional-808300 dockerd[3911]: 2024/06/03 12:46:24 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772084063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772274159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772357358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.775252298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945246488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945323086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945406685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.950967170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029005105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029349598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029863988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.030264081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039564104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039688602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039761901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039928798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226303462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226586457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226751953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.227086747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347252567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347436764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347474363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347654660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.441905572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442046969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442209966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442589559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.635985990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636416182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636608978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.637648558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.848060467Z" level=info msg="ignoring event" container=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851167708Z" level=info msg="shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851742597Z" level=warning msg="cleaning up after shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851821695Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.861031421Z" level=info msg="ignoring event" container=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.864043064Z" level=info msg="shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.865018845Z" level=info msg="ignoring event" container=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866029226Z" level=warning msg="cleaning up after shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866146324Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.865866429Z" level=info msg="shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866559616Z" level=warning msg="cleaning up after shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866626315Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.874086573Z" level=info msg="ignoring event" container=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.875139053Z" level=info msg="ignoring event" container=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879726666Z" level=info msg="shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.883291398Z" level=warning msg="cleaning up after shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879810365Z" level=info msg="shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886134245Z" level=warning msg="cleaning up after shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886413939Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.884961767Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.005534788Z" level=info msg="ignoring event" container=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007078361Z" level=info msg="shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007356756Z" level=warning msg="cleaning up after shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007522453Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.117025348Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.487894595Z" level=info msg="ignoring event" container=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.489713764Z" level=info msg="shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490041558Z" level=warning msg="cleaning up after shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490061758Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.915977147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916565637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916679435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916848732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.031752879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032666665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032798863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.033668649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3911]: time="2024-06-03T12:46:29.861712863Z" level=info msg="ignoring event" container=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863639332Z" level=info msg="shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863797430Z" level=warning msg="cleaning up after shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863862329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194045838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194125737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194139737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194288235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.324621840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326281415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326470813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326978105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424497687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424951381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447077459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447586651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531075037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531171736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531184436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531290034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542348873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542475071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542490771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542581970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554547048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554615849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554645449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554819849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595679596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595829096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595871096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.596066296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615722419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615775719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615802019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615963419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619500423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619605123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619619223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619740523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.362279071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.364954075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365043476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365060876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365137676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363853574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363885474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363981074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401018432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401163732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401199732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401348832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:50:18 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.355659920Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.500564779Z" level=info msg="ignoring event" container=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.502392091Z" level=info msg="shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505257410Z" level=warning msg="cleaning up after shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505505012Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.559469469Z" level=info msg="ignoring event" container=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562029186Z" level=info msg="shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562079586Z" level=warning msg="cleaning up after shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562089586Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.565925812Z" level=info msg="ignoring event" container=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566150213Z" level=info msg="shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566239014Z" level=warning msg="cleaning up after shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566294014Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.568666030Z" level=info msg="ignoring event" container=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568889531Z" level=info msg="shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568944532Z" level=warning msg="cleaning up after shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568956532Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.591020678Z" level=info msg="ignoring event" container=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591289280Z" level=info msg="shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591381680Z" level=warning msg="cleaning up after shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591394180Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.601843549Z" level=info msg="shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602416253Z" level=info msg="ignoring event" container=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602469454Z" level=info msg="ignoring event" container=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602501354Z" level=info msg="ignoring event" container=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602446653Z" level=warning msg="cleaning up after shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602625555Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608358493Z" level=info msg="shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608420693Z" level=warning msg="cleaning up after shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608435393Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622700688Z" level=info msg="shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622837388Z" level=warning msg="cleaning up after shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622919789Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651705580Z" level=info msg="shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651827580Z" level=warning msg="cleaning up after shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651840680Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653814394Z" level=info msg="ignoring event" container=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653869794Z" level=info msg="ignoring event" container=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656537812Z" level=info msg="shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656607912Z" level=warning msg="cleaning up after shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656638212Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689247628Z" level=info msg="shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689349429Z" level=warning msg="cleaning up after shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689362229Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.689544230Z" level=info msg="ignoring event" container=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.776260304Z" level=info msg="ignoring event" container=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.781705240Z" level=info msg="shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782034342Z" level=warning msg="cleaning up after shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782163743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.471467983Z" level=info msg="shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472291989Z" level=warning msg="cleaning up after shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472355489Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3911]: time="2024-06-03T12:50:23.473084794Z" level=info msg="ignoring event" container=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.462170568Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.522259595Z" level=info msg="ignoring event" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524322178Z" level=info msg="shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524549387Z" level=warning msg="cleaning up after shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524566388Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.585453246Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586244178Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586390484Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586415685Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:50:29 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Consumed 9.808s CPU time.
	Jun 03 12:50:29 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:50:29 functional-808300 dockerd[7943]: time="2024-06-03T12:50:29.663260817Z" level=info msg="Starting up"
	Jun 03 12:51:29 functional-808300 dockerd[7943]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 12:51:29 functional-808300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0603 12:51:29.786899    1732 out.go:239] * 
	W0603 12:51:29.788963    1732 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:51:29.789078    1732 out.go:177] 
	
	
	==> Docker <==
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID 'f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID 'dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID 'eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d'"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="error getting RW layer size for container ID '1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:10:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:10:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181'"
	Jun 03 13:10:34 functional-808300 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jun 03 13:10:34 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 13:10:34 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-03T13:10:36Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +13.935296] systemd-fstab-generator[2356]: Ignoring "noauto" option for root device
	[  +0.285231] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.968672] kauditd_printk_skb: 71 callbacks suppressed
	[Jun 3 12:46] systemd-fstab-generator[3432]: Ignoring "noauto" option for root device
	[  +0.669802] systemd-fstab-generator[3482]: Ignoring "noauto" option for root device
	[  +0.254078] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.299244] systemd-fstab-generator[3508]: Ignoring "noauto" option for root device
	[  +5.308659] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.948638] systemd-fstab-generator[4092]: Ignoring "noauto" option for root device
	[  +0.218396] systemd-fstab-generator[4104]: Ignoring "noauto" option for root device
	[  +0.206903] systemd-fstab-generator[4116]: Ignoring "noauto" option for root device
	[  +0.257355] systemd-fstab-generator[4131]: Ignoring "noauto" option for root device
	[  +0.830261] systemd-fstab-generator[4289]: Ignoring "noauto" option for root device
	[  +0.959896] kauditd_printk_skb: 142 callbacks suppressed
	[  +5.613475] systemd-fstab-generator[5386]: Ignoring "noauto" option for root device
	[  +0.142828] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.855368] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.262421] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.413051] systemd-fstab-generator[5910]: Ignoring "noauto" option for root device
	[Jun 3 12:50] systemd-fstab-generator[7480]: Ignoring "noauto" option for root device
	[  +0.143757] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.490699] systemd-fstab-generator[7516]: Ignoring "noauto" option for root device
	[  +0.290075] systemd-fstab-generator[7529]: Ignoring "noauto" option for root device
	[  +0.285138] systemd-fstab-generator[7542]: Ignoring "noauto" option for root device
	[  +5.306666] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 13:11:35 up 29 min,  0 users,  load average: 0.00, 0.01, 0.03
	Linux functional-808300 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 13:11:32 functional-808300 kubelet[5393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:11:32 functional-808300 kubelet[5393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:11:32 functional-808300 kubelet[5393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:11:32 functional-808300 kubelet[5393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.596614    5393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused" interval="7s"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.807929    5393 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.22.146.164:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-808300.17d57f81dea98cbd  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-808300,UID:11918179ce61499bb08bfc780760a360,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://172.22.146.164:8441/livez\": dial tcp 172.22.146.164:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-808300,},FirstTimestamp:2024-06-03 12:50:28.674874557 +0000 UTC m=+236.049288249,LastTimestamp:2024-06-03 12:50:28.674874557 +0000 UTC m=+236.049288249,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-808300,}"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.808111    5393 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-functional-808300.17d57f81dea98cbd  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-808300,UID:11918179ce61499bb08bfc780760a360,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://172.22.146.164:8441/livez\": dial tcp 172.22.146.164:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-808300,},FirstTimestamp:2024-06-03 12:50:28.674874557 +0000 UTC m=+236.049288249,LastTimestamp:2024-06-03 12:50:28.674874557 +0000 UTC m=+236.049288249,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-808300,}"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.809648    5393 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-808300.17d57f81d4a04596\": dial tcp 172.22.146.164:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-808300.17d57f81d4a04596  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-808300,UID:11918179ce61499bb08bfc780760a360,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.22.146.164:8441/readyz\": dial tcp 172.22.146.164:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-808300,},FirstTimestamp:2024-06-03 12:50:28.506494358 +0000 UTC m=+235.880908150,LastTimestamp:2024-06-03 12:50:28.819543899 +0000 UTC m=+236.193957591,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-808300,}"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.910072    5393 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.910229    5393 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.910151    5393 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.911376    5393 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.911584    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.911644    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.911831    5393 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.912041    5393 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: I0603 13:11:34.912612    5393 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.910072    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.916065    5393 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.911517    5393 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.911592    5393 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.917451    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.917567    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 03 13:11:34 functional-808300 kubelet[5393]: E0603 13:11:34.918239    5393 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jun 03 13:11:35 functional-808300 kubelet[5393]: E0603 13:11:35.050387    5393 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 21m17.215236285s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:08:01.135694    2184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0603 13:08:34.083854    2184 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:08:34.150983    2184 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:08:34.183652    2184 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:09:34.365063    2184 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:09:34.419980    2184 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:10:34.586686    2184 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:10:34.640240    2184 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:10:34.684055    2184 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300: exit status 2 (12.7372045s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:11:36.174407   15100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-808300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (491.49s)

                                                
                                    
TestFunctional/parallel/MySQL (230.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-808300 replace --force -f testdata\mysql.yaml
functional_test.go:1789: (dbg) Non-zero exit: kubectl --context functional-808300 replace --force -f testdata\mysql.yaml: exit status 1 (4.2304613s)

                                                
                                                
** stderr ** 
	error when deleting "testdata\\mysql.yaml": Delete "https://172.22.146.164:8441/api/v1/namespaces/default/services/mysql": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
	error when deleting "testdata\\mysql.yaml": Delete "https://172.22.146.164:8441/apis/apps/v1/namespaces/default/deployments/mysql": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:1791: failed to kubectl replace mysql: args "kubectl --context functional-808300 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300: exit status 2 (12.2946363s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:13:02.536378    9168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs -n 25: (3m21.8433956s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| tunnel         | functional-808300 tunnel                                                | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| tunnel         | functional-808300 tunnel                                                | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image          | functional-808300 image load --daemon                                   | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC | 03 Jun 24 13:05 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-808300                |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image          | functional-808300 image ls                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC | 03 Jun 24 13:06 UTC |
	| image          | functional-808300 image load --daemon                                   | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:06 UTC | 03 Jun 24 13:07 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-808300                |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image          | functional-808300 image ls                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:07 UTC | 03 Jun 24 13:08 UTC |
	| image          | functional-808300 image load --daemon                                   | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:08 UTC | 03 Jun 24 13:09 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-808300                |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image          | functional-808300 image ls                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:09 UTC | 03 Jun 24 13:10 UTC |
	| image          | functional-808300 image save                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:10 UTC | 03 Jun 24 13:11 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-808300                |                   |                   |         |                     |                     |
	|                | C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| image          | functional-808300 image rm                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:12 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-808300                |                   |                   |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |                   |         |                     |                     |
	| ssh            | functional-808300 ssh sudo cat                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|                | /etc/ssl/certs/10544.pem                                                |                   |                   |         |                     |                     |
	| ssh            | functional-808300 ssh sudo cat                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:12 UTC |
	|                | /usr/share/ca-certificates/10544.pem                                    |                   |                   |         |                     |                     |
	| ssh            | functional-808300 ssh sudo cat                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:12 UTC | 03 Jun 24 13:12 UTC |
	|                | /etc/ssl/certs/51391683.0                                               |                   |                   |         |                     |                     |
	| ssh            | functional-808300 ssh sudo cat                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:12 UTC | 03 Jun 24 13:12 UTC |
	|                | /etc/ssl/certs/105442.pem                                               |                   |                   |         |                     |                     |
	| start          | -p functional-808300                                                    | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:12 UTC |                     |
	|                | --dry-run --memory                                                      |                   |                   |         |                     |                     |
	|                | 250MB --alsologtostderr                                                 |                   |                   |         |                     |                     |
	|                | --driver=hyperv                                                         |                   |                   |         |                     |                     |
	| docker-env     | functional-808300 docker-env                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:12 UTC |                     |
	| ssh            | functional-808300 ssh sudo cat                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:12 UTC | 03 Jun 24 13:12 UTC |
	|                | /usr/share/ca-certificates/105442.pem                                   |                   |                   |         |                     |                     |
	| image          | functional-808300 image ls                                              | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:12 UTC |                     |
	| ssh            | functional-808300 ssh sudo cat                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:12 UTC | 03 Jun 24 13:12 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                               |                   |                   |         |                     |                     |
	| start          | -p functional-808300                                                    | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:12 UTC |                     |
	|                | --dry-run --memory                                                      |                   |                   |         |                     |                     |
	|                | 250MB --alsologtostderr                                                 |                   |                   |         |                     |                     |
	|                | --driver=hyperv                                                         |                   |                   |         |                     |                     |
	| ssh            | functional-808300 ssh sudo cat                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:12 UTC | 03 Jun 24 13:12 UTC |
	|                | /etc/test/nested/copy/10544/hosts                                       |                   |                   |         |                     |                     |
	| dashboard      | --url --port 36195                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:12 UTC |                     |
	|                | -p functional-808300                                                    |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=1                                                  |                   |                   |         |                     |                     |
	| update-context | functional-808300                                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:13 UTC | 03 Jun 24 13:13 UTC |
	|                | update-context                                                          |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |                   |         |                     |                     |
	| update-context | functional-808300                                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:13 UTC | 03 Jun 24 13:13 UTC |
	|                | update-context                                                          |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |                   |         |                     |                     |
	| update-context | functional-808300                                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:13 UTC |                     |
	|                | update-context                                                          |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |                   |         |                     |                     |
	|----------------|-------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:12:47
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:12:47.615753   14472 out.go:291] Setting OutFile to fd 1160 ...
	I0603 13:12:47.616801   14472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:12:47.616801   14472 out.go:304] Setting ErrFile to fd 1088...
	I0603 13:12:47.616801   14472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:12:47.638177   14472 out.go:298] Setting JSON to false
	I0603 13:12:47.641582   14472 start.go:129] hostinfo: {"hostname":"minikube3","uptime":21296,"bootTime":1717399071,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 13:12:47.641747   14472 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 13:12:47.645620   14472 out.go:177] * [functional-808300] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 13:12:47.648860   14472 notify.go:220] Checking for updates...
	I0603 13:12:47.651690   14472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:12:47.654234   14472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:12:47.657304   14472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 13:12:47.659891   14472 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:12:47.662074   14472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'eade14c1c5b68d71c1e8c6f2a27d27e6e6125b8a2fff7d7e9e148c8ed2e70b7d'"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="error getting RW layer size for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f'"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="error getting RW layer size for container ID '02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165'"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="error getting RW layer size for container ID '65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c'"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="error getting RW layer size for container ID 'c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b'"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="error getting RW layer size for container ID '83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210'"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="error getting RW layer size for container ID 'dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908'"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="error getting RW layer size for container ID '2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428'"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="error getting RW layer size for container ID '1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc'"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="error getting RW layer size for container ID 'be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="error getting RW layer size for container ID '75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495'"
	Jun 03 13:15:35 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:15:35Z" level=error msg="Set backoffDuration to : 1m0s for container ID '75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca'"
	Jun 03 13:15:36 functional-808300 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jun 03 13:15:36 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 13:15:36 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-03T13:15:38Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.669802] systemd-fstab-generator[3482]: Ignoring "noauto" option for root device
	[  +0.254078] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.299244] systemd-fstab-generator[3508]: Ignoring "noauto" option for root device
	[  +5.308659] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.948638] systemd-fstab-generator[4092]: Ignoring "noauto" option for root device
	[  +0.218396] systemd-fstab-generator[4104]: Ignoring "noauto" option for root device
	[  +0.206903] systemd-fstab-generator[4116]: Ignoring "noauto" option for root device
	[  +0.257355] systemd-fstab-generator[4131]: Ignoring "noauto" option for root device
	[  +0.830261] systemd-fstab-generator[4289]: Ignoring "noauto" option for root device
	[  +0.959896] kauditd_printk_skb: 142 callbacks suppressed
	[  +5.613475] systemd-fstab-generator[5386]: Ignoring "noauto" option for root device
	[  +0.142828] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.855368] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.262421] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.413051] systemd-fstab-generator[5910]: Ignoring "noauto" option for root device
	[Jun 3 12:50] systemd-fstab-generator[7480]: Ignoring "noauto" option for root device
	[  +0.143757] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.490699] systemd-fstab-generator[7516]: Ignoring "noauto" option for root device
	[  +0.290075] systemd-fstab-generator[7529]: Ignoring "noauto" option for root device
	[  +0.285138] systemd-fstab-generator[7542]: Ignoring "noauto" option for root device
	[  +5.306666] kauditd_printk_skb: 89 callbacks suppressed
	[Jun 3 13:12] systemd-fstab-generator[14338]: Ignoring "noauto" option for root device
	[  +0.862634] systemd-fstab-generator[14364]: Ignoring "noauto" option for root device
	[Jun 3 13:16] systemd-fstab-generator[15610]: Ignoring "noauto" option for root device
	[  +0.130450] kauditd_printk_skb: 34 callbacks suppressed
	
	
	==> kernel <==
	 13:16:36 up 34 min,  0 users,  load average: 0.14, 0.07, 0.03
	Linux functional-808300 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 13:16:32 functional-808300 kubelet[5393]: E0603 13:16:32.469799    5393 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-808300\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 13:16:32 functional-808300 kubelet[5393]: E0603 13:16:32.469874    5393 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jun 03 13:16:32 functional-808300 kubelet[5393]: I0603 13:16:32.897245    5393 status_manager.go:853] "Failed to get status for pod" podUID="11918179ce61499bb08bfc780760a360" pod="kube-system/kube-apiserver-functional-808300" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-808300\": dial tcp 172.22.146.164:8441: connect: connection refused"
	Jun 03 13:16:32 functional-808300 kubelet[5393]: E0603 13:16:32.926678    5393 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:16:32 functional-808300 kubelet[5393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:16:32 functional-808300 kubelet[5393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:16:32 functional-808300 kubelet[5393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:16:32 functional-808300 kubelet[5393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:16:35 functional-808300 kubelet[5393]: E0603 13:16:35.111979    5393 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 26m17.277800807s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jun 03 13:16:35 functional-808300 kubelet[5393]: E0603 13:16:35.707082    5393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-808300?timeout=10s\": dial tcp 172.22.146.164:8441: connect: connection refused" interval="7s"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.170815    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.171412    5393 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.171294    5393 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.171506    5393 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: I0603 13:16:36.171556    5393 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.170703    5393 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.171610    5393 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.171343    5393 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.171634    5393 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.171650    5393 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.171365    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.171674    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.172506    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.172778    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 03 13:16:36 functional-808300 kubelet[5393]: E0603 13:16:36.172989    5393 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:13:14.831724   13616 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0603 13:13:35.533328   13616 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.45/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_kube-apiserver%22%3Atrue%7D%7D": dial unix /var/run/docker.sock: connect: permission denied
	E0603 13:14:35.667448   13616 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:14:35.705891   13616 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:14:35.749525   13616 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:15:35.888744   13616 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:15:35.933026   13616 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:15:35.972605   13616 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:15:36.005618   13616 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300: exit status 2 (12.3000486s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:16:36.689905    6436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-808300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (230.70s)
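Note: the repeated "Cannot connect to the Docker daemon at unix:///var/run/docker.sock" and "read unix @->/var/run/docker.sock: read: connection reset by peer" errors in the logs above all come from clients talking to the Docker Engine API over its unix socket while dockerd is down. The snippet below is a minimal sketch, not part of the test suite, of such a probe in Go; the /_ping endpoint and the socket path are standard Docker Engine API details, everything else here is illustrative.

package main

// Sketch: probe the Docker Engine API over the unix socket that kubelet and
// minikube fail to reach in the logs above. Assumes a Linux host with dockerd
// expected at /var/run/docker.sock.

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	// Route all HTTP traffic over the unix socket instead of TCP.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}

	// GET /_ping returns "OK" when the daemon is healthy; a dial or read error
	// here corresponds to the connection failures recorded in the log.
	resp, err := client.Get("http://unix/_ping")
	if err != nil {
		fmt.Println("docker daemon not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("docker daemon responded %d: %s\n", resp.StatusCode, body)
}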

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (241.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-808300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-808300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (2.1845339s)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-808300 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
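For context, the "slice index out of range" failures above are a property of the go-template itself rather than of kubectl: once the apiserver refuses connections, kubectl falls back to the empty List shown as "raw data", and `index .items 0` has nothing to index. The sketch below reproduces that behavior with Go's standard text/template only; the data literal mirrors the raw data printed in the log, and the guarded variant is an illustrative alternative, not what the test uses.

package main

// Sketch: why the test's go-template errors on an empty node list, plus a
// guarded variant that degrades gracefully instead of failing.

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// Shaped like the kubectl output above: a List with no items.
	emptyList := map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "List",
		"items":      []interface{}{},
	}

	// Same template the test passes to kubectl; indexing items[0] fails on an
	// empty list, producing the "error calling index" seen in the output above.
	unguarded := template.Must(template.New("output").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	if err := unguarded.Execute(os.Stdout, emptyList); err != nil {
		fmt.Fprintln(os.Stderr, "unguarded template:", err)
	}

	// Guarded variant: only index when items is non-empty.
	guarded := template.Must(template.New("output").Parse(
		`{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{else}}no nodes{{end}}`))
	_ = guarded.Execute(os.Stdout, emptyList) // prints "no nodes"
	fmt.Println()
}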
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300: exit status 2 (11.6412175s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:08:49.904263   10740 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs -n 25: (3m34.3086042s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| config  | functional-808300 config get                                                                        | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| addons  | functional-808300 addons list                                                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	| addons  | functional-808300 addons list                                                                       | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| service | functional-808300 service list                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	| ssh     | functional-808300 ssh -n                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:03 UTC |
	|         | functional-808300 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| service | functional-808300 service list                                                                      | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| service | functional-808300 service                                                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC |                     |
	|         | --namespace=default --https                                                                         |                   |                   |         |                     |                     |
	|         | --url hello-node                                                                                    |                   |                   |         |                     |                     |
	| cp      | functional-808300 cp functional-808300:/home/docker/cp-test.txt                                     | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:03 UTC | 03 Jun 24 13:04 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd2662913280\001\cp-test.txt |                   |                   |         |                     |                     |
	| service | functional-808300                                                                                   | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC |                     |
	|         | service hello-node --url                                                                            |                   |                   |         |                     |                     |
	|         | --format={{.IP}}                                                                                    |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh -n                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | functional-808300 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| service | functional-808300 service                                                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC |                     |
	|         | hello-node --url                                                                                    |                   |                   |         |                     |                     |
	| cp      | functional-808300 cp                                                                                | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh -n                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | functional-808300 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| license |                                                                                                     | minikube          | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	| ssh     | functional-808300 ssh echo                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | hello                                                                                               |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh cat                                                                           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC | 03 Jun 24 13:04 UTC |
	|         | /etc/hostname                                                                                       |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh sudo                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:04 UTC |                     |
	|         | systemctl is-active crio                                                                            |                   |                   |         |                     |                     |
	| tunnel  | functional-808300 tunnel                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel  | functional-808300 tunnel                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| tunnel  | functional-808300 tunnel                                                                            | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image   | functional-808300 image load --daemon                                                               | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC | 03 Jun 24 13:05 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-808300                                            |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image   | functional-808300 image ls                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:05 UTC | 03 Jun 24 13:06 UTC |
	| image   | functional-808300 image load --daemon                                                               | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:06 UTC | 03 Jun 24 13:07 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-808300                                            |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image   | functional-808300 image ls                                                                          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:07 UTC | 03 Jun 24 13:08 UTC |
	| image   | functional-808300 image load --daemon                                                               | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:08 UTC |                     |
	|         | gcr.io/google-containers/addon-resizer:functional-808300                                            |                   |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:49:00
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:49:00.235842    1732 out.go:291] Setting OutFile to fd 840 ...
	I0603 12:49:00.236577    1732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:49:00.236577    1732 out.go:304] Setting ErrFile to fd 616...
	I0603 12:49:00.236577    1732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:49:00.261282    1732 out.go:298] Setting JSON to false
	I0603 12:49:00.264282    1732 start.go:129] hostinfo: {"hostname":"minikube3","uptime":19868,"bootTime":1717399071,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 12:49:00.264282    1732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 12:49:00.270409    1732 out.go:177] * [functional-808300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 12:49:00.274641    1732 notify.go:220] Checking for updates...
	I0603 12:49:00.276699    1732 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:49:00.278693    1732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:49:00.281652    1732 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 12:49:00.284648    1732 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:49:00.286651    1732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:49:00.291036    1732 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:49:00.291858    1732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:49:05.570980    1732 out.go:177] * Using the hyperv driver based on existing profile
	I0603 12:49:05.575724    1732 start.go:297] selected driver: hyperv
	I0603 12:49:05.575724    1732 start.go:901] validating driver "hyperv" against &{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:49:05.575724    1732 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:49:05.626806    1732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:49:05.626806    1732 cni.go:84] Creating CNI manager for ""
	I0603 12:49:05.626806    1732 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:49:05.626806    1732 start.go:340] cluster config:
	{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-808300 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.146.164 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:49:05.626806    1732 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:49:05.633624    1732 out.go:177] * Starting "functional-808300" primary control-plane node in "functional-808300" cluster
	I0603 12:49:05.636635    1732 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 12:49:05.637158    1732 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 12:49:05.637158    1732 cache.go:56] Caching tarball of preloaded images
	I0603 12:49:05.637684    1732 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 12:49:05.637751    1732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 12:49:05.637751    1732 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\config.json ...
	I0603 12:49:05.640967    1732 start.go:360] acquireMachinesLock for functional-808300: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:49:05.640967    1732 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-808300"
	I0603 12:49:05.640967    1732 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:49:05.640967    1732 fix.go:54] fixHost starting: 
	I0603 12:49:05.641715    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:08.415782    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:08.415782    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:08.415782    1732 fix.go:112] recreateIfNeeded on functional-808300: state=Running err=<nil>
	W0603 12:49:08.416795    1732 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:49:08.420899    1732 out.go:177] * Updating the running hyperv "functional-808300" VM ...
	I0603 12:49:08.423508    1732 machine.go:94] provisionDockerMachine start ...
	I0603 12:49:08.423582    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:10.712165    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:13.253487    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:13.254503    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:13.260432    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:13.261482    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:13.261482    1732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:49:13.399057    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:49:13.399210    1732 buildroot.go:166] provisioning hostname "functional-808300"
	I0603 12:49:13.399210    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:15.541436    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:15.541675    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:15.541675    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:18.074512    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:18.074512    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:18.080673    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:18.081341    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:18.081341    1732 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-808300 && echo "functional-808300" | sudo tee /etc/hostname
	I0603 12:49:18.249098    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0603 12:49:18.249098    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:20.352120    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:20.352282    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:20.352356    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:22.898474    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:22.898474    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:22.905033    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:22.905583    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:22.905583    1732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-808300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-808300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-808300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:49:23.038156    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:49:23.038156    1732 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 12:49:23.038286    1732 buildroot.go:174] setting up certificates
	I0603 12:49:23.038286    1732 provision.go:84] configureAuth start
	I0603 12:49:23.038368    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:25.168408    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:27.735183    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:27.735183    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:27.736187    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:29.872286    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:32.410109    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:32.410109    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:32.410109    1732 provision.go:143] copyHostCerts
	I0603 12:49:32.410879    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 12:49:32.410879    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 12:49:32.411331    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 12:49:32.412635    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 12:49:32.412635    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 12:49:32.412996    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 12:49:32.414198    1732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 12:49:32.414198    1732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 12:49:32.414545    1732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 12:49:32.415610    1732 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-808300 san=[127.0.0.1 172.22.146.164 functional-808300 localhost minikube]
	I0603 12:49:32.712767    1732 provision.go:177] copyRemoteCerts
	I0603 12:49:32.724764    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:49:32.724764    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:34.837128    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:34.837128    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:34.837856    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:37.375330    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:37.375330    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:37.375559    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:49:37.480771    1732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7559241s)
	I0603 12:49:37.480826    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:49:37.528205    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 12:49:37.578459    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:49:37.627279    1732 provision.go:87] duration metric: took 14.5888698s to configureAuth
	I0603 12:49:37.627279    1732 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:49:37.628273    1732 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 12:49:37.628273    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:39.750715    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:39.750715    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:39.750894    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:42.248163    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:42.248163    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:42.253817    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:42.254350    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:42.254350    1732 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 12:49:42.390315    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 12:49:42.390315    1732 buildroot.go:70] root file system type: tmpfs
	I0603 12:49:42.390486    1732 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 12:49:42.390577    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:44.488308    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:47.015306    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:47.015306    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:47.020999    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:47.020999    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:47.021566    1732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 12:49:47.189720    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 12:49:47.189902    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:49.328254    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:51.842444    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:51.842685    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:51.847410    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:49:51.848026    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:49:51.848136    1732 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 12:49:52.002270    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:49:52.002270    1732 machine.go:97] duration metric: took 43.5783954s to provisionDockerMachine
	I0603 12:49:52.002270    1732 start.go:293] postStartSetup for "functional-808300" (driver="hyperv")
	I0603 12:49:52.002270    1732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:49:52.014902    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:49:52.014902    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:54.129644    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:54.129780    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:54.129780    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:49:56.657058    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:49:56.657058    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:56.657058    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:49:56.769087    1732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.754029s)
	I0603 12:49:56.782600    1732 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:49:56.789695    1732 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:49:56.789695    1732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 12:49:56.790223    1732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 12:49:56.790944    1732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 12:49:56.791808    1732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts -> hosts in /etc/test/nested/copy/10544
	I0603 12:49:56.804680    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/10544
	I0603 12:49:56.825546    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 12:49:56.870114    1732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts --> /etc/test/nested/copy/10544/hosts (40 bytes)
	I0603 12:49:56.918755    1732 start.go:296] duration metric: took 4.9164445s for postStartSetup
	I0603 12:49:56.918830    1732 fix.go:56] duration metric: took 51.2774317s for fixHost
	I0603 12:49:56.918830    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:49:59.043954    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:01.610237    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:01.610237    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:01.616356    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:01.616925    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:50:01.616925    1732 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:50:01.754458    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717419001.765759569
	
	I0603 12:50:01.754458    1732 fix.go:216] guest clock: 1717419001.765759569
	I0603 12:50:01.754999    1732 fix.go:229] Guest: 2024-06-03 12:50:01.765759569 +0000 UTC Remote: 2024-06-03 12:49:56.9188301 +0000 UTC m=+56.849473901 (delta=4.846929469s)
	I0603 12:50:01.755117    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:03.919135    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:06.434824    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:06.434824    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:06.441287    1732 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:06.441474    1732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.164 22 <nil> <nil>}
	I0603 12:50:06.441474    1732 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717419001
	I0603 12:50:06.585742    1732 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 12:50:01 UTC 2024
	
	I0603 12:50:06.585742    1732 fix.go:236] clock set: Mon Jun  3 12:50:01 UTC 2024
	 (err=<nil>)
	I0603 12:50:06.585742    1732 start.go:83] releasing machines lock for "functional-808300", held for 1m0.9442633s
	I0603 12:50:06.586483    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:08.723911    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:11.280358    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:11.280358    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:11.286940    1732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:50:11.287127    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:11.297353    1732 ssh_runner.go:195] Run: cat /version.json
	I0603 12:50:11.297353    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:13.490806    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:13.526365    1732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 12:50:13.526365    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:13.526449    1732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0603 12:50:16.184971    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:16.184971    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:16.185280    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:50:16.202281    1732 main.go:141] libmachine: [stdout =====>] : 172.22.146.164
	
	I0603 12:50:16.202281    1732 main.go:141] libmachine: [stderr =====>] : 
	I0603 12:50:16.203074    1732 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0603 12:50:16.291651    1732 ssh_runner.go:235] Completed: cat /version.json: (4.9942561s)
	I0603 12:50:16.306274    1732 ssh_runner.go:195] Run: systemctl --version
	I0603 12:50:16.355391    1732 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0675511s)
	I0603 12:50:16.366636    1732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:50:16.375691    1732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:50:16.388090    1732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:50:16.405978    1732 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 12:50:16.405978    1732 start.go:494] detecting cgroup driver to use...
	I0603 12:50:16.405978    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:50:16.453816    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 12:50:16.485596    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 12:50:16.503969    1732 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 12:50:16.517971    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 12:50:16.549156    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:50:16.581312    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 12:50:16.612775    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 12:50:16.647414    1732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:50:16.678358    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 12:50:16.708418    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 12:50:16.743475    1732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 12:50:16.776832    1732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:50:16.806324    1732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:50:16.840166    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:50:17.096238    1732 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 12:50:17.129261    1732 start.go:494] detecting cgroup driver to use...
	I0603 12:50:17.142588    1732 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 12:50:17.178015    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:50:17.214526    1732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:50:17.282409    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:50:17.322016    1732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 12:50:17.346060    1732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:50:17.394003    1732 ssh_runner.go:195] Run: which cri-dockerd
	I0603 12:50:17.411821    1732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 12:50:17.430017    1732 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 12:50:17.478608    1732 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 12:50:17.759911    1732 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 12:50:18.009777    1732 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 12:50:18.009777    1732 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 12:50:18.055298    1732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:50:18.318935    1732 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 12:51:29.680979    1732 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3613501s)
	I0603 12:51:29.693407    1732 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0603 12:51:29.782469    1732 out.go:177] 
	W0603 12:51:29.786096    1732 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 03 12:43:24 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.628866122Z" level=info msg="Starting up"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.630311181Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:24 functional-808300 dockerd[673]: time="2024-06-03T12:43:24.634028433Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.661523756Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685876251Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.685936153Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686065059Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686231965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686317369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686429774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686588180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686671783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686689684Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686701185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.686787688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.687222106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689704107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689791211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.689905315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690003819Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690236329Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690393535Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.690500340Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716000481Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716245191Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716277293Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716304794Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716324495Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716446300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716794814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.716969021Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717114327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717181530Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717203130Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717218631Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717231232Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717245932Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717260533Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717272933Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717285134Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717297434Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717327536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717348336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717362137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717375337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717387738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717400138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717412139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717424939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717439040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717453441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717465841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717477642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717489642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717504543Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717524444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717538544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717550045Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717602747Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717628148Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717640148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717652149Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717663249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717675450Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717686050Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.717990963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718194271Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718615288Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:24 functional-808300 dockerd[679]: time="2024-06-03T12:43:24.718715492Z" level=info msg="containerd successfully booted in 0.058473s"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.702473456Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:25 functional-808300 dockerd[673]: time="2024-06-03T12:43:25.735688127Z" level=info msg="Loading containers: start."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.010503637Z" level=info msg="Loading containers: done."
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031232026Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.031421030Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.159563851Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:26 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:26 functional-808300 dockerd[673]: time="2024-06-03T12:43:26.161009285Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:43:56 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.687463640Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.689959945Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690215845Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690324445Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:43:56 functional-808300 dockerd[673]: time="2024-06-03T12:43:56.690369545Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:43:57 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:43:57 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:43:57 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.780438278Z" level=info msg="Starting up"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.781801780Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:43:57 functional-808300 dockerd[1027]: time="2024-06-03T12:43:57.787716190Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1033
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.819821447Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846310594Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846401094Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846519995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846539495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846563695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846575995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846813395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846924995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846964595Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.846992395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847016696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.847167896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.849934901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850031601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850168801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850259101Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850291801Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850310501Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850321201Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850561202Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850705702Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850744702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850771602Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850787202Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.850831302Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851085603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851156303Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851172503Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851184203Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851196303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851208703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851219903Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851231903Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851245403Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851257303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851269103Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851295403Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851313103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851325103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851341303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851354003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851367703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851379503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851390703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851401803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851413403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851426003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851437203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851447803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851458203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851471403Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851491803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851503303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851513904Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851549004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851658104Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851678204Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851698604Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851709004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851720604Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.851734804Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852115105Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852376705Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852445905Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:43:57 functional-808300 dockerd[1033]: time="2024-06-03T12:43:57.852489705Z" level=info msg="containerd successfully booted in 0.033698s"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.828570435Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:43:58 functional-808300 dockerd[1027]: time="2024-06-03T12:43:58.851038275Z" level=info msg="Loading containers: start."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.026943787Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.118964350Z" level=info msg="Loading containers: done."
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141485490Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.141680390Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.197188889Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:43:59 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:43:59 functional-808300 dockerd[1027]: time="2024-06-03T12:43:59.198903592Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.853372506Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.854600708Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855309009Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855465609Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:44:08 functional-808300 dockerd[1027]: time="2024-06-03T12:44:08.855498609Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:44:08 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:44:09 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:44:09 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.931457417Z" level=info msg="Starting up"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.932516719Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:44:09 functional-808300 dockerd[1328]: time="2024-06-03T12:44:09.934127421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1334
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.966766979Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992224024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992259224Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992358425Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992394325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992420125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992436425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992562225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992696325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992729425Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992741025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992765125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.992867525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996464532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996565532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996738732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996823633Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996855433Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996872533Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.996882433Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997062833Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997113833Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997130833Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997144433Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997157233Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997203633Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997453534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997578234Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997614934Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997663134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997678134Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997689934Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997700634Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997715034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997729234Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997740634Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997752034Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997762234Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997779734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997792334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997804134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997815434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997826234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997837534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997847934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997884934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997921334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997937534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997948435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997958635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997969935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.997987135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998006735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998018335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998028535Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998087335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998102835Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998113035Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998125435Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998134935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998146935Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998156235Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998467335Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998587736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998680736Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:44:09 functional-808300 dockerd[1334]: time="2024-06-03T12:44:09.998717236Z" level=info msg="containerd successfully booted in 0.033704s"
	Jun 03 12:44:10 functional-808300 dockerd[1328]: time="2024-06-03T12:44:10.979375074Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:44:13 functional-808300 dockerd[1328]: time="2024-06-03T12:44:13.979794393Z" level=info msg="Loading containers: start."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.166761224Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.246745866Z" level=info msg="Loading containers: done."
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275542917Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.275794717Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318299593Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:44:14 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:44:14 functional-808300 dockerd[1328]: time="2024-06-03T12:44:14.318416693Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481193033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.481300231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.482452008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.483163794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555242697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555441293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.555463693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.556420474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641567724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641688622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.641972616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.642377908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696408761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.696920551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697026749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.697598738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.923771454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.925833014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926097609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.926698097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975113159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975335655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.975440053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:21 functional-808300 dockerd[1334]: time="2024-06-03T12:44:21.976007342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079922031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.079992130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080044229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.080177726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127553471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.127864765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.128102061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:22 functional-808300 dockerd[1334]: time="2024-06-03T12:44:22.134911038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534039591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534739189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.534993488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:42 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.535448286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:42.999922775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001555370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001675769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:43 functional-808300 dockerd[1334]: time="2024-06-03T12:44:43.001896169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.574212998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575391194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.575730993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:44 functional-808300 dockerd[1334]: time="2024-06-03T12:44:44.576013792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119735326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119816834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.119850737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:45 functional-808300 dockerd[1334]: time="2024-06-03T12:44:45.120575802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591893357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.591995665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592015367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.592819829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.866872994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867043707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867059308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:44:50 functional-808300 dockerd[1334]: time="2024-06-03T12:44:50.867176618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:11 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.320707911Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.530075506Z" level=info msg="ignoring event" container=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530863111Z" level=info msg="shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530934512Z" level=warning msg="cleaning up after shim disconnected" id=96a2f05f22306fd34137aab928b4fc5befe9906e5814d9189f062d0f5d065419 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.530947812Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548201118Z" level=info msg="shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548262819Z" level=warning msg="cleaning up after shim disconnected" id=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.548275819Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.548926923Z" level=info msg="ignoring event" container=e4a3d1aad706ea31a3c91963f858433991f34be43bb610c4ee07bca14ffd98b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.555005761Z" level=info msg="ignoring event" container=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555226762Z" level=info msg="shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555637564Z" level=warning msg="cleaning up after shim disconnected" id=68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.555871866Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571443362Z" level=info msg="shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571642763Z" level=info msg="ignoring event" container=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571688564Z" level=info msg="ignoring event" container=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571715264Z" level=info msg="ignoring event" container=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.571729764Z" level=info msg="ignoring event" container=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583600637Z" level=warning msg="cleaning up after shim disconnected" id=9d93705fdb4a880b6f62829c01c54f8fb92d505968b51153af5d76787eb1fdcc namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.583651738Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571922365Z" level=info msg="shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602203453Z" level=warning msg="cleaning up after shim disconnected" id=2189bdf4fdf5a58f7b772f240d4f329ca3418ca5dabf18ea70d3e646d7eb5fd9 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.602215153Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.605428672Z" level=info msg="shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605570873Z" level=info msg="ignoring event" container=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605648174Z" level=info msg="ignoring event" container=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605689174Z" level=info msg="ignoring event" container=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.605708174Z" level=info msg="ignoring event" container=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616825743Z" level=info msg="shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619069757Z" level=warning msg="cleaning up after shim disconnected" id=455f2c45f2644270fdb5801b446a96974ce3dc5017eb92addd0592396ed9fae3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.619081657Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571968865Z" level=info msg="shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.622950981Z" level=warning msg="cleaning up after shim disconnected" id=04d2064bec327beb1f7e3a48212e53625c364cb347e44fdd25d93379f2f767b3 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.623019281Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616768943Z" level=info msg="shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649220943Z" level=warning msg="cleaning up after shim disconnected" id=27708ce50b045526985c23a68b6ec5de46d742c5410f35f023413c2591f3f532 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649232743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649593346Z" level=warning msg="cleaning up after shim disconnected" id=edfe17d226ba72d719f49b58654727437ab5d4dfed90c30633c65c38c79e5e3d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.649632646Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.616798243Z" level=info msg="shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660353412Z" level=warning msg="cleaning up after shim disconnected" id=1dccd16bf407a6ce2b27e92415ceb1943911351945ffa5d4d9d62a154971ff17 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.660613314Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.571948565Z" level=info msg="shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661857022Z" level=warning msg="cleaning up after shim disconnected" id=d92f2286f410ddd228e9c328ade62a9fe12480756c5355affd1440bf5f5c2be8 namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.661869022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.701730868Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.789945914Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1328]: time="2024-06-03T12:46:11.800700381Z" level=info msg="ignoring event" container=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802193190Z" level=info msg="shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802687893Z" level=warning msg="cleaning up after shim disconnected" id=99e6936fbfd38bbe5b8d895396a2c59c6375300a6751676db21ad920ec91a17d namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.802957394Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:11 functional-808300 dockerd[1334]: time="2024-06-03T12:46:11.865834983Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1328]: time="2024-06-03T12:46:16.426781600Z" level=info msg="ignoring event" container=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429021313Z" level=info msg="shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429197714Z" level=warning msg="cleaning up after shim disconnected" id=c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.429215515Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:16 functional-808300 dockerd[1334]: time="2024-06-03T12:46:16.461057012Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.432071476Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.471179469Z" level=info msg="ignoring event" container=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471301366Z" level=info msg="shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471394963Z" level=warning msg="cleaning up after shim disconnected" id=23fd19559e8795167da13464dce5762864dc5bae39232bfddc84b4fae9708c54 namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1334]: time="2024-06-03T12:46:21.471408762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.533991230Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534869803Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.534996499Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:46:21 functional-808300 dockerd[1328]: time="2024-06-03T12:46:21.535310690Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:46:22 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:46:22 functional-808300 systemd[1]: docker.service: Consumed 4.876s CPU time.
	Jun 03 12:46:22 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.610929688Z" level=info msg="Starting up"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.611865461Z" level=info msg="containerd not running, starting managed containerd"
	Jun 03 12:46:22 functional-808300 dockerd[3911]: time="2024-06-03T12:46:22.613136725Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=3917
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.646536071Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670247194Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670360391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670450088Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670483087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670506787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670539786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670840677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670938074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670960374Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670972073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.670998073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.671139469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674461374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.674583370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675060557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675230152Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675269851Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675297750Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675312250Z" level=info msg="metadata content store policy set" policy=shared
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675642440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675701438Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675746437Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675788936Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675843034Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.675898433Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677513487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677902676Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.677984973Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678005973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678019272Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678033372Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678045471Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678074771Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678087670Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678099470Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678111970Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678122369Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678141069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678165268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678179068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678190967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678201767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678212967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678223666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678234666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678245966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678259765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678270865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678281565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678298864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678314564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678506758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678611555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678628755Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.678700553Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679040743Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679084142Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679118541Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679144240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679155740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679165739Z" level=info msg="NRI interface is disabled by configuration."
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679517929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679766922Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679827521Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 03 12:46:22 functional-808300 dockerd[3917]: time="2024-06-03T12:46:22.679865720Z" level=info msg="containerd successfully booted in 0.035745s"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.663212880Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.695980015Z" level=info msg="Loading containers: start."
	Jun 03 12:46:23 functional-808300 dockerd[3911]: time="2024-06-03T12:46:23.961510211Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.046062971Z" level=info msg="Loading containers: done."
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.075922544Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.076129939Z" level=info msg="Daemon has completed initialization"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124525761Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.124901652Z" level=info msg="API listen on [::]:2376"
	Jun 03 12:46:24 functional-808300 systemd[1]: Started Docker Application Container Engine.
	Jun 03 12:46:24 functional-808300 dockerd[3911]: time="2024-06-03T12:46:24.231994444Z" level=error msg="Handler for GET /v1.44/containers/68532ac6c504345a23783add3b0bb8ea8c4a487b4fa23bc0d657427129626ffd/json returned error: write unix /var/run/docker.sock->@: write: broken pipe" spanID=326af23131ec94a7 traceID=8803c53e169299942225f4075fc21de5
	Jun 03 12:46:24 functional-808300 dockerd[3911]: 2024/06/03 12:46:24 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772084063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772274159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.772357358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.775252298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945246488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945323086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.945406685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:26 functional-808300 dockerd[3917]: time="2024-06-03T12:46:26.950967170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029005105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029349598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.029863988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.030264081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039564104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039688602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039761901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.039928798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226303462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226586457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.226751953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.227086747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347252567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347436764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347474363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.347654660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.441905572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442046969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442209966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.442589559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.635985990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636416182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.636608978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.637648558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.848060467Z" level=info msg="ignoring event" container=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851167708Z" level=info msg="shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851742597Z" level=warning msg="cleaning up after shim disconnected" id=5d6e5cc420d9639383fea95503133c6708a3d2ddc9925ba7584d3ed5a298c8f2 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.851821695Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.861031421Z" level=info msg="ignoring event" container=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.864043064Z" level=info msg="shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.865018845Z" level=info msg="ignoring event" container=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866029226Z" level=warning msg="cleaning up after shim disconnected" id=ce20c4c25d1810db55b65e9418315d386a729b3e560c5fb659dd6b49e2b7eca4 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866146324Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.865866429Z" level=info msg="shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866559616Z" level=warning msg="cleaning up after shim disconnected" id=75af9fb73dddf7c7ec7cbd659c2c7d50f7f842b01ebd37e5cb0b7c1ceb9c46df namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.866626315Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.874086573Z" level=info msg="ignoring event" container=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3911]: time="2024-06-03T12:46:27.875139053Z" level=info msg="ignoring event" container=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879726666Z" level=info msg="shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.883291398Z" level=warning msg="cleaning up after shim disconnected" id=69c1d2f0cb64c822f5511e123fe5c58aa248c3a845a20883655a580affe8ea26 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.879810365Z" level=info msg="shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886134245Z" level=warning msg="cleaning up after shim disconnected" id=86b73cfdf66cf96c47e9c9063c5f91b94bc732ff4ea5cb9f7791f71463c6d3d0 namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.886413939Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:27 functional-808300 dockerd[3917]: time="2024-06-03T12:46:27.884961767Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.005534788Z" level=info msg="ignoring event" container=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007078361Z" level=info msg="shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007356756Z" level=warning msg="cleaning up after shim disconnected" id=eb74516b16cf4a2263078224fc5f703c5b02058c1b053241acc95254cc626715 namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.007522453Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.117025348Z" level=warning msg="cleanup warnings time=\"2024-06-03T12:46:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3911]: time="2024-06-03T12:46:28.487894595Z" level=info msg="ignoring event" container=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.489713764Z" level=info msg="shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490041558Z" level=warning msg="cleaning up after shim disconnected" id=155addeb6f57b06cca1763d12fd750d09bb486aeec90c259a05c5965d2f149ef namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.490061758Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.915977147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916565637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916679435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:28 functional-808300 dockerd[3917]: time="2024-06-03T12:46:28.916848732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.031752879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032666665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.032798863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.033668649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:29 functional-808300 dockerd[3911]: time="2024-06-03T12:46:29.861712863Z" level=info msg="ignoring event" container=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863639332Z" level=info msg="shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863797430Z" level=warning msg="cleaning up after shim disconnected" id=02843dfe5169fa16f362f3cceec7796819d6e784524c41dd06fcaf521341b165 namespace=moby
	Jun 03 12:46:29 functional-808300 dockerd[3917]: time="2024-06-03T12:46:29.863862329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194045838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194125737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194139737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.194288235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.324621840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326281415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326470813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.326978105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424497687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.424951381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447077459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.447586651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531075037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531171736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531184436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.531290034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542348873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542475071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542490771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:30 functional-808300 dockerd[3917]: time="2024-06-03T12:46:30.542581970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554547048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554615849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554645449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.554819849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595679596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595829096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.595871096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.596066296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615722419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615775719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615802019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.615963419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619500423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619605123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619619223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:33 functional-808300 dockerd[3917]: time="2024-06-03T12:46:33.619740523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.362279071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.364954075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365043476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365060876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.365137676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363853574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363885474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.363981074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401018432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401163732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401199732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:46:38 functional-808300 dockerd[3917]: time="2024-06-03T12:46:38.401348832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:48:46 functional-808300 dockerd[3911]: 2024/06/03 12:48:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 12:50:18 functional-808300 systemd[1]: Stopping Docker Application Container Engine...
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.355659920Z" level=info msg="Processing signal 'terminated'"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.500564779Z" level=info msg="ignoring event" container=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.502392091Z" level=info msg="shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505257410Z" level=warning msg="cleaning up after shim disconnected" id=c5bda73a137959daad223c375702161ae6c804a66cd7055bec4a500611e80a33 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.505505012Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.559469469Z" level=info msg="ignoring event" container=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562029186Z" level=info msg="shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562079586Z" level=warning msg="cleaning up after shim disconnected" id=e13d219adabb0fac47478c6dcb6933d23a25124e7749eed0eac8db2be4e60ea2 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.562089586Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.565925812Z" level=info msg="ignoring event" container=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566150213Z" level=info msg="shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566239014Z" level=warning msg="cleaning up after shim disconnected" id=0d1392b7a58699c349f5338496eecaf537e3e4aeb40f9d59ee4c7b07877f07b0 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.566294014Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.568666030Z" level=info msg="ignoring event" container=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568889531Z" level=info msg="shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568944532Z" level=warning msg="cleaning up after shim disconnected" id=f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.568956532Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.591020678Z" level=info msg="ignoring event" container=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591289280Z" level=info msg="shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591381680Z" level=warning msg="cleaning up after shim disconnected" id=2c63105d6657d8c9104349850b705e4ed6f6c2d9210e9064ccd08eb229140ae4 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.591394180Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.601843549Z" level=info msg="shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602416253Z" level=info msg="ignoring event" container=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602469454Z" level=info msg="ignoring event" container=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.602501354Z" level=info msg="ignoring event" container=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602446653Z" level=warning msg="cleaning up after shim disconnected" id=dc04e828659641a49946793e98c105718da28b0021b782bdb52dfd0565934d43 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.602625555Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608358493Z" level=info msg="shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608420693Z" level=warning msg="cleaning up after shim disconnected" id=dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.608435393Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622700688Z" level=info msg="shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622837388Z" level=warning msg="cleaning up after shim disconnected" id=75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.622919789Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651705580Z" level=info msg="shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651827580Z" level=warning msg="cleaning up after shim disconnected" id=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.651840680Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653814394Z" level=info msg="ignoring event" container=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.653869794Z" level=info msg="ignoring event" container=8a2a7c2d993dfee2ad7caeddda06880996a1f61e55aae97e610d0a48ab8a5859 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656537812Z" level=info msg="shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656607912Z" level=warning msg="cleaning up after shim disconnected" id=21d1a639c77e5ef536e1d8740cb4559d5f10fd8b20d845ed2cfbad73681ce7b9 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.656638212Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689247628Z" level=info msg="shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689349429Z" level=warning msg="cleaning up after shim disconnected" id=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.689362229Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.689544230Z" level=info msg="ignoring event" container=be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3911]: time="2024-06-03T12:50:18.776260304Z" level=info msg="ignoring event" container=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.781705240Z" level=info msg="shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782034342Z" level=warning msg="cleaning up after shim disconnected" id=83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf namespace=moby
	Jun 03 12:50:18 functional-808300 dockerd[3917]: time="2024-06-03T12:50:18.782163743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.471467983Z" level=info msg="shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472291989Z" level=warning msg="cleaning up after shim disconnected" id=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3917]: time="2024-06-03T12:50:23.472355489Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:23 functional-808300 dockerd[3911]: time="2024-06-03T12:50:23.473084794Z" level=info msg="ignoring event" container=1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.462170568Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.522259595Z" level=info msg="ignoring event" container=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524322178Z" level=info msg="shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524549387Z" level=warning msg="cleaning up after shim disconnected" id=1f3d2239938b2e98f6e5689791f40d29c11c8ce79fb7aecb46a4b7e234ce0181 namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3917]: time="2024-06-03T12:50:28.524566388Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.585453246Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586244178Z" level=info msg="Daemon shutdown complete"
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586390484Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 03 12:50:28 functional-808300 dockerd[3911]: time="2024-06-03T12:50:28.586415685Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Deactivated successfully.
	Jun 03 12:50:29 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 12:50:29 functional-808300 systemd[1]: docker.service: Consumed 9.808s CPU time.
	Jun 03 12:50:29 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	Jun 03 12:50:29 functional-808300 dockerd[7943]: time="2024-06-03T12:50:29.663260817Z" level=info msg="Starting up"
	Jun 03 12:51:29 functional-808300 dockerd[7943]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 03 12:51:29 functional-808300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 03 12:51:29 functional-808300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0603 12:51:29.786899    1732 out.go:239] * 
	W0603 12:51:29.788963    1732 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:51:29.789078    1732 out.go:177] 
	
	
	==> Docker <==
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'be000e19e002b69c910e131fbca96c99d37f71b0ab801ea87711eb9e8eb8f495'"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '83b5eb4ecd28f2f920bc2e85770667f002bcb71dc24a351868ea2aa2c9c6a8cf'"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="error getting RW layer size for container ID '2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2fe782b706294a2d93b0559df9e80e9f143e2efb4671d4d008ab64cb9a273428'"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="error getting RW layer size for container ID 'dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dcdcc621dd5c602bdecb19c20b29e9bb6bcdddb0616320684d75c82f58313908'"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="error getting RW layer size for container ID '65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '65d6796adbfbe3360cd160233835da1a640ba771d612938d84f25cb4c624f37c'"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="error getting RW layer size for container ID '1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1ff0e8444e017cc602970a4ca118d3c893e98ac8f0ad20c7778879fea1c078cc'"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="error getting RW layer size for container ID 'f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f452cbb2687597501ddb3f7803708a567fbcb59fe58cd30042e0d7fb54ef532b'"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="error getting RW layer size for container ID '75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '75f43b1538ea88b6b3e7c83f114893a9d171908ccbea84a502048073a7e01dca'"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="error getting RW layer size for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '577e1c60911fab9d3d2fddda9d240e63b968bdbbf7e6d821bf5804058c99d79f'"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="error getting RW layer size for container ID '83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '83c4519534936b47943633e71982d66fc9000d357e821416c54d98a1d728b210'"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="error getting RW layer size for container ID 'c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:11:34 functional-808300 cri-dockerd[4143]: time="2024-06-03T13:11:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c4fb3a7c664e666ebf2a0fb73ba020fb1090e1addec8e36c83691509959a775b'"
	Jun 03 13:11:35 functional-808300 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jun 03 13:11:35 functional-808300 systemd[1]: Stopped Docker Application Container Engine.
	Jun 03 13:11:35 functional-808300 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-03T13:11:37Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +7.968672] kauditd_printk_skb: 71 callbacks suppressed
	[Jun 3 12:46] systemd-fstab-generator[3432]: Ignoring "noauto" option for root device
	[  +0.669802] systemd-fstab-generator[3482]: Ignoring "noauto" option for root device
	[  +0.254078] systemd-fstab-generator[3494]: Ignoring "noauto" option for root device
	[  +0.299244] systemd-fstab-generator[3508]: Ignoring "noauto" option for root device
	[  +5.308659] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.948638] systemd-fstab-generator[4092]: Ignoring "noauto" option for root device
	[  +0.218396] systemd-fstab-generator[4104]: Ignoring "noauto" option for root device
	[  +0.206903] systemd-fstab-generator[4116]: Ignoring "noauto" option for root device
	[  +0.257355] systemd-fstab-generator[4131]: Ignoring "noauto" option for root device
	[  +0.830261] systemd-fstab-generator[4289]: Ignoring "noauto" option for root device
	[  +0.959896] kauditd_printk_skb: 142 callbacks suppressed
	[  +5.613475] systemd-fstab-generator[5386]: Ignoring "noauto" option for root device
	[  +0.142828] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.855368] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.262421] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.413051] systemd-fstab-generator[5910]: Ignoring "noauto" option for root device
	[Jun 3 12:50] systemd-fstab-generator[7480]: Ignoring "noauto" option for root device
	[  +0.143757] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.490699] systemd-fstab-generator[7516]: Ignoring "noauto" option for root device
	[  +0.290075] systemd-fstab-generator[7529]: Ignoring "noauto" option for root device
	[  +0.285138] systemd-fstab-generator[7542]: Ignoring "noauto" option for root device
	[  +5.306666] kauditd_printk_skb: 89 callbacks suppressed
	[Jun 3 13:12] systemd-fstab-generator[14338]: Ignoring "noauto" option for root device
	[  +0.862634] systemd-fstab-generator[14364]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 13:12:35 up 30 min,  0 users,  load average: 0.00, 0.00, 0.03
	Linux functional-808300 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 13:12:32 functional-808300 kubelet[5393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:12:32 functional-808300 kubelet[5393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:12:32 functional-808300 kubelet[5393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:12:32 functional-808300 kubelet[5393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:12:33 functional-808300 kubelet[5393]: E0603 13:12:33.234525    5393 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-808300.17d57f81d4a04596\": dial tcp 172.22.146.164:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-808300.17d57f81d4a04596  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-808300,UID:11918179ce61499bb08bfc780760a360,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.22.146.164:8441/readyz\": dial tcp 172.22.146.164:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-808300,},FirstTimestamp:2024-06-03 12:50:28.506494358 +0000 UTC m=+235.880908150,LastTimestamp:2024-06-03 12:50:28.819543899 +0000 UTC m=+236.193957591,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-808300,}"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.065157    5393 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 22m17.230973856s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.234276    5393 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.234387    5393 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.234442    5393 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.236593    5393 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.236856    5393 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.237502    5393 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: I0603 13:12:35.237594    5393 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.237448    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.236674    5393 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.237662    5393 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.236922    5393 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.237863    5393 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: I0603 13:12:35.238250    5393 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.240635    5393 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.237474    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.243615    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.245301    5393 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.245448    5393 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jun 03 13:12:35 functional-808300 kubelet[5393]: E0603 13:12:35.246383    5393 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:09:01.539532    2932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0603 13:09:34.364878    2932 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:09:34.434239    2932 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:10:34.586659    2932 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:10:34.648845    2932 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:10:34.677672    2932 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:11:34.898232    2932 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:11:34.948230    2932 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0603 13:11:34.993132    2932 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300: exit status 2 (12.5920043s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:12:36.226168   12100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-808300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (241.09s)
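
Editor's note: the %!F(MISSING), %!B(MISSING) and %!l(MISSING)abel noise in the kubelet log above is not corruption in this report. The Docker client's already percent-encoded socket URL (http://%2Fvar%2Frun%2Fdocker.sock/...) is evidently passed through a printf-style formatter, so Go's fmt package reads sequences such as %2F as a verb with no matching argument and renders them as "%!F(MISSING)". A minimal sketch reproducing the effect, using a URL copied from the log (nothing else is assumed):

package main

import "fmt"

func main() {
	// Percent-encoded Docker socket URL, as it appears in the kubelet errors.
	u := "http://%2Fvar%2Frun%2Fdocker.sock/v1.44/images/json"

	// Deliberately (mis)using the string as a format string: "%2F" is parsed as
	// width 2 + verb 'F' with no argument, which fmt renders as "%!F(MISSING)".
	fmt.Printf(u)
	fmt.Println()
	// Prints: http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json
}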

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-808300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1435: (dbg) Non-zero exit: kubectl --context functional-808300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: exit status 1 (2.2113713s)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://172.22.146.164:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-808300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (2.23s)
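
Editor's note: every kubectl call in this run fails the same way, with the apiserver endpoint 172.22.146.164:8441 refusing the TCP connection. A plain dial against the address taken from the error above confirms it is the socket, not kubectl configuration, that is the problem. This is a hypothetical pre-check for illustration, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address copied from the connectex error above.
	addr := "172.22.146.164:8441"

	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// On Windows a refused connection surfaces as "connectex: No connection
		// could be made because the target machine actively refused it."
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}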

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (7.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 service list: exit status 103 (7.4834847s)

                                                
                                                
-- stdout --
	* The control-plane node functional-808300 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-808300"

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:03:39.659219    5952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1457: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-808300 service list" : exit status 103
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-808300 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-808300\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (7.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (7.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 service list -o json: exit status 103 (7.6702649s)

                                                
                                                
-- stdout --
	* The control-plane node functional-808300 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-808300"

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:03:47.126052    8216 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1487: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-808300 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (7.67s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (7.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 service --namespace=default --https --url hello-node: exit status 103 (7.6599644s)

                                                
                                                
-- stdout --
	* The control-plane node functional-808300 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-808300"

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:03:54.781551    3600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-808300 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (7.66s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (7.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 service hello-node --url --format={{.IP}}: exit status 103 (7.7424462s)

                                                
                                                
-- stdout --
	* The control-plane node functional-808300 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-808300"

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:04:02.453895    8856 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-808300 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1544: "* The control-plane node functional-808300 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-808300\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (7.74s)
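
Editor's note: the Format subtest treats whatever `service hello-node --url --format={{.IP}}` prints as an address, so when minikube prints its "apiserver is not running" advice instead, the string fails the IP check. A minimal sketch of that kind of validation; net.ParseIP is assumed here as the check, the exact helper in functional_test.go may differ:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	// Output captured from the failing command above.
	out := "* The control-plane node functional-808300 apiserver is not running: (state=Stopped)\n" +
		"  To start a cluster, run: \"minikube start -p functional-808300\""

	if ip := net.ParseIP(strings.TrimSpace(out)); ip == nil {
		fmt.Printf("%q is not a valid IP\n", out)
	}
}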

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (7.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 service hello-node --url: exit status 103 (7.4237621s)

                                                
                                                
-- stdout --
	* The control-plane node functional-808300 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-808300"

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:04:10.201152   10632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-808300 service hello-node --url": exit status 103
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-808300 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-808300"
functional_test.go:1565: failed to parse "* The control-plane node functional-808300 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-808300\"": parse "* The control-plane node functional-808300 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-808300\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (7.42s)
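
Editor's note: the URL subtest fails one step later for the same underlying reason. The two-line advice text is handed to net/url, and the embedded newline is an ASCII control character, which url.Parse rejects with exactly the error shown above. A small reproduction using the stdout from the log:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	out := "* The control-plane node functional-808300 apiserver is not running: (state=Stopped)\n" +
		"  To start a cluster, run: \"minikube start -p functional-808300\""

	if _, err := url.Parse(out); err != nil {
		// Prints: parse "...": net/url: invalid control character in URL
		fmt.Println(err)
	}
}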

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (7.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: W0603 13:05:03.033240    7308 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0603 13:05:03.135005    7308 out.go:291] Setting OutFile to fd 1016 ...
I0603 13:05:03.152380    7308 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 13:05:03.152463    7308 out.go:304] Setting ErrFile to fd 636...
I0603 13:05:03.152463    7308 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 13:05:03.180252    7308 mustload.go:65] Loading cluster: functional-808300
I0603 13:05:03.181284    7308 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 13:05:03.182767    7308 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0603 13:05:05.391678    7308 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 13:05:05.391678    7308 main.go:141] libmachine: [stderr =====>] : 
I0603 13:05:05.391678    7308 host.go:66] Checking if "functional-808300" exists ...
I0603 13:05:05.392944    7308 api_server.go:166] Checking apiserver status ...
I0603 13:05:05.407527    7308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0603 13:05:05.407594    7308 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0603 13:05:07.641863    7308 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 13:05:07.641863    7308 main.go:141] libmachine: [stderr =====>] : 
I0603 13:05:07.642943    7308 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
I0603 13:05:10.292516    7308 main.go:141] libmachine: [stdout =====>] : 172.22.146.164

                                                
                                                
I0603 13:05:10.292516    7308 main.go:141] libmachine: [stderr =====>] : 
I0603 13:05:10.292865    7308 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
I0603 13:05:10.414623    7308 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.006987s)
W0603 13:05:10.414623    7308 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0603 13:05:10.418972    7308 out.go:177] * The control-plane node functional-808300 apiserver is not running: (state=Stopped)
I0603 13:05:10.422535    7308 out.go:177]   To start a cluster, run: "minikube start -p functional-808300"

                                                
                                                
stdout: * The control-plane node functional-808300 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-808300"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 11368: Access is denied.
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr] stdout:
* The control-plane node functional-808300 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-808300"
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (7.55s)
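
Editor's note: the tunnel command reports the apiserver as Stopped because the probe it runs inside the VM, `sudo pgrep -xnf kube-apiserver.*minikube.*`, exits with status 1 (no matching process). A hedged sketch of interpreting that exit code; this runs pgrep locally for illustration rather than over SSH into the guest as minikube does:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits 0 when a process matches and 1 when none does.
	cmd := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("apiserver pid: %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Println("no kube-apiserver process found (state=Stopped)")
	default:
		fmt.Println("probe failed:", err)
	}
}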

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (4.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-808300 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-808300 apply -f testdata\testsvc.yaml: exit status 1 (4.2526181s)

                                                
                                                
** stderr ** 
	error: error validating "testdata\\testsvc.yaml": error validating data: failed to download openapi: Get "https://172.22.146.164:8441/openapi/v2?timeout=32s": dial tcp 172.22.146.164:8441: connectex: No connection could be made because the target machine actively refused it.; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-808300 apply -f testdata\testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (4.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (59.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls --format short --alsologtostderr: (59.9643561s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-808300 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-808300 image ls --format short --alsologtostderr:
W0603 13:14:36.065507   10820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0603 13:14:36.162297   10820 out.go:291] Setting OutFile to fd 784 ...
I0603 13:14:36.163048   10820 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 13:14:36.163048   10820 out.go:304] Setting ErrFile to fd 1056...
I0603 13:14:36.163048   10820 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 13:14:36.184616   10820 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 13:14:36.185916   10820 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 13:14:36.187148   10820 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0603 13:14:38.499369   10820 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 13:14:38.499369   10820 main.go:141] libmachine: [stderr =====>] : 
I0603 13:14:38.515623   10820 ssh_runner.go:195] Run: systemctl --version
I0603 13:14:38.515623   10820 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0603 13:14:40.776640   10820 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 13:14:40.776897   10820 main.go:141] libmachine: [stderr =====>] : 
I0603 13:14:40.776897   10820 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
I0603 13:14:43.368536   10820 main.go:141] libmachine: [stdout =====>] : 172.22.146.164

                                                
                                                
I0603 13:14:43.368626   10820 main.go:141] libmachine: [stderr =====>] : 
I0603 13:14:43.368798   10820 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
I0603 13:14:43.472682   10820 ssh_runner.go:235] Completed: systemctl --version: (4.957018s)
I0603 13:14:43.484039   10820 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0603 13:15:35.880163   10820 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.3956896s)
W0603 13:15:35.880445   10820 cache_images.go:715] Failed to list images for profile functional-808300 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

                                                
                                                
stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (59.96s)
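
Editor's note: the image ls path asks the guest's Docker for `docker images --no-trunc --format "{{json .}}"` and then checks that registry.k8s.io/pause appears in the result; with the daemon down the command exits 1 and the list comes back empty, which also explains the identical ImageListTable/Json/Yaml failures below. For reference, a minimal sketch of consuming that JSON-lines output when the daemon is healthy (field names follow the keys `docker images --format "{{json .}}"` emits, one JSON object per line):

package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type imageRow struct {
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	ID         string `json:"ID"`
}

func main() {
	out, err := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker images failed (is the daemon running?):", err)
		return
	}

	found := false
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		var row imageRow
		if err := json.Unmarshal(sc.Bytes(), &row); err != nil {
			continue
		}
		if strings.HasPrefix(row.Repository, "registry.k8s.io/pause") {
			found = true
		}
	}
	fmt.Println("registry.k8s.io/pause present:", found)
}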

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (60.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls --format table --alsologtostderr: (1m0.2606948s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-808300 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-808300 image ls --format table --alsologtostderr:
W0603 13:15:36.043605   10900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0603 13:15:36.130624   10900 out.go:291] Setting OutFile to fd 1048 ...
I0603 13:15:36.131675   10900 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 13:15:36.131675   10900 out.go:304] Setting ErrFile to fd 960...
I0603 13:15:36.131675   10900 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 13:15:36.164117   10900 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 13:15:36.165100   10900 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 13:15:36.166097   10900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0603 13:15:38.450264   10900 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 13:15:38.450264   10900 main.go:141] libmachine: [stderr =====>] : 
I0603 13:15:38.467226   10900 ssh_runner.go:195] Run: systemctl --version
I0603 13:15:38.467226   10900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0603 13:15:40.731981   10900 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 13:15:40.731981   10900 main.go:141] libmachine: [stderr =====>] : 
I0603 13:15:40.731981   10900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
I0603 13:15:43.413852   10900 main.go:141] libmachine: [stdout =====>] : 172.22.146.164

                                                
                                                
I0603 13:15:43.413898   10900 main.go:141] libmachine: [stderr =====>] : 
I0603 13:15:43.414114   10900 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
I0603 13:15:43.514084   10900 ssh_runner.go:235] Completed: systemctl --version: (5.0466554s)
I0603 13:15:43.525178   10900 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0603 13:16:36.161204   10900 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.6346376s)
W0603 13:16:36.161204   10900 cache_images.go:715] Failed to list images for profile functional-808300 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

                                                
                                                
stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (60.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (60.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls --format json --alsologtostderr: (1m0.2644996s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-808300 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-808300 image ls --format json --alsologtostderr:
W0603 13:15:36.035612   14564 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0603 13:15:36.130624   14564 out.go:291] Setting OutFile to fd 748 ...
I0603 13:15:36.131675   14564 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 13:15:36.131675   14564 out.go:304] Setting ErrFile to fd 784...
I0603 13:15:36.131675   14564 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 13:15:36.150125   14564 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 13:15:36.150125   14564 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 13:15:36.151097   14564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0603 13:15:38.450696   14564 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 13:15:38.450772   14564 main.go:141] libmachine: [stderr =====>] : 
I0603 13:15:38.466288   14564 ssh_runner.go:195] Run: systemctl --version
I0603 13:15:38.466288   14564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0603 13:15:40.710311   14564 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 13:15:40.710601   14564 main.go:141] libmachine: [stderr =====>] : 
I0603 13:15:40.710601   14564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
I0603 13:15:43.394503   14564 main.go:141] libmachine: [stdout =====>] : 172.22.146.164

                                                
                                                
I0603 13:15:43.394716   14564 main.go:141] libmachine: [stderr =====>] : 
I0603 13:15:43.394716   14564 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
I0603 13:15:43.498137   14564 ssh_runner.go:235] Completed: systemctl --version: (5.0318068s)
I0603 13:15:43.508748   14564 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0603 13:16:36.160252   14564 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.6510668s)
W0603 13:16:36.160252   14564 cache_images.go:715] Failed to list images for profile functional-808300 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

                                                
                                                
stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (60.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (59.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls --format yaml --alsologtostderr
E0603 13:15:14.736462   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls --format yaml --alsologtostderr: (59.9902927s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-808300 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-808300 image ls --format yaml --alsologtostderr:
W0603 13:14:36.067495    7596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0603 13:14:36.164437    7596 out.go:291] Setting OutFile to fd 1176 ...
I0603 13:14:36.184616    7596 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 13:14:36.184616    7596 out.go:304] Setting ErrFile to fd 780...
I0603 13:14:36.184616    7596 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 13:14:36.205458    7596 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 13:14:36.205458    7596 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 13:14:36.206450    7596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0603 13:14:38.546784    7596 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 13:14:38.546784    7596 main.go:141] libmachine: [stderr =====>] : 
I0603 13:14:38.563314    7596 ssh_runner.go:195] Run: systemctl --version
I0603 13:14:38.564166    7596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0603 13:14:40.852282    7596 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 13:14:40.852282    7596 main.go:141] libmachine: [stderr =====>] : 
I0603 13:14:40.852282    7596 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
I0603 13:14:43.497305    7596 main.go:141] libmachine: [stdout =====>] : 172.22.146.164

                                                
                                                
I0603 13:14:43.497305    7596 main.go:141] libmachine: [stderr =====>] : 
I0603 13:14:43.497305    7596 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
I0603 13:14:43.597911    7596 ssh_runner.go:235] Completed: systemctl --version: (5.0345552s)
I0603 13:14:43.607964    7596 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0603 13:15:35.886303    7596 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.2779045s)
W0603 13:15:35.886303    7596 cache_images.go:715] Failed to list images for profile functional-808300 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

                                                
                                                
stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (59.99s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (120.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 ssh pgrep buildkitd: exit status 1 (9.6382604s)

                                                
                                                
** stderr ** 
	W0603 13:16:36.292576   11396 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image build -t localhost/my-image:functional-808300 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image build -t localhost/my-image:functional-808300 testdata\build --alsologtostderr: (50.8267632s)
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-808300 image build -t localhost/my-image:functional-808300 testdata\build --alsologtostderr:
W0603 13:16:45.921118    9140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0603 13:16:46.008767    9140 out.go:291] Setting OutFile to fd 636 ...
I0603 13:16:46.027348    9140 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 13:16:46.027348    9140 out.go:304] Setting ErrFile to fd 1108...
I0603 13:16:46.027348    9140 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 13:16:46.041793    9140 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 13:16:46.063946    9140 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0603 13:16:46.064975    9140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0603 13:16:48.269338    9140 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 13:16:48.269338    9140 main.go:141] libmachine: [stderr =====>] : 
I0603 13:16:48.281757    9140 ssh_runner.go:195] Run: systemctl --version
I0603 13:16:48.281757    9140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0603 13:16:50.426786    9140 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0603 13:16:50.426786    9140 main.go:141] libmachine: [stderr =====>] : 
I0603 13:16:50.426926    9140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
I0603 13:16:52.933088    9140 main.go:141] libmachine: [stdout =====>] : 172.22.146.164

                                                
                                                
I0603 13:16:52.933088    9140 main.go:141] libmachine: [stderr =====>] : 
I0603 13:16:52.933637    9140 sshutil.go:53] new ssh client: &{IP:172.22.146.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
I0603 13:16:53.033461    9140 ssh_runner.go:235] Completed: systemctl --version: (4.7516113s)
I0603 13:16:53.033555    9140 build_images.go:161] Building image from path: C:\Users\jenkins.minikube3\AppData\Local\Temp\build.1292129197.tar
I0603 13:16:53.046283    9140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0603 13:16:53.079309    9140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1292129197.tar
I0603 13:16:53.086876    9140 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1292129197.tar: stat -c "%s %y" /var/lib/minikube/build/build.1292129197.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1292129197.tar': No such file or directory
I0603 13:16:53.087010    9140 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\AppData\Local\Temp\build.1292129197.tar --> /var/lib/minikube/build/build.1292129197.tar (3072 bytes)
I0603 13:16:53.144367    9140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1292129197
I0603 13:16:53.171018    9140 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1292129197 -xf /var/lib/minikube/build/build.1292129197.tar
I0603 13:16:53.187088    9140 docker.go:360] Building image: /var/lib/minikube/build/build.1292129197
I0603 13:16:53.197696    9140 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-808300 /var/lib/minikube/build/build.1292129197
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0603 13:17:36.605871    9140 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-808300 /var/lib/minikube/build/build.1292129197: (43.4077054s)
W0603 13:17:36.606090    9140 build_images.go:125] Failed to build image for profile functional-808300. make sure the profile is running. Docker build /var/lib/minikube/build/build.1292129197.tar: buildimage docker: docker build -t localhost/my-image:functional-808300 /var/lib/minikube/build/build.1292129197: Process exited with status 1
stdout:

                                                
                                                
stderr:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0603 13:17:36.606161    9140 build_images.go:133] succeeded building to: 
I0603 13:17:36.606252    9140 build_images.go:134] failed building to: functional-808300
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls: (1m0.2352695s)
functional_test.go:442: expected "localhost/my-image:functional-808300" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (120.70s)
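
Editor's note: ImageBuild copies the small build-context tar into /var/lib/minikube/build, unpacks it, and runs `docker build` in the guest; both the build and the follow-up `image ls` fail only because the daemon is unreachable. A hedged pre-flight check that would distinguish "daemon down" from a genuine build failure, using only a standard docker CLI call:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `docker version --format '{{.Server.Version}}'` only succeeds when the
	// daemon behind /var/run/docker.sock answers the API ping.
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		fmt.Println("docker daemon not reachable, skipping build:", err)
		return
	}
	fmt.Println("docker daemon up, server version", strings.TrimSpace(string(out)))
	// Only now is it worth running:
	//   docker build -t localhost/my-image:functional-808300 /var/lib/minikube/build/<context>
}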

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (74.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image load --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image load --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr: (14.7762865s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls: (1m0.1921779s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-808300" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (74.97s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image load --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image load --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr: (1m0.2605379s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls: (1m0.2341758s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-808300" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.4761883s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-808300
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image load --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image load --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr: (56.5800265s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls
E0603 13:10:14.735413   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls: (1m0.1969662s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-808300" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (60.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image save gcr.io/google-containers/addon-resizer:functional-808300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image save gcr.io/google-containers/addon-resizer:functional-808300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (1m0.3321345s)
functional_test.go:385: expected "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (60.33s)
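
Editor's note: ImageSaveToFile only asserts that the tarball exists on the host after `image save`; with the daemon down nothing is written, so the existence check fails. A minimal sketch of that kind of check, with the path copied from the test output:

package main

import (
	"fmt"
	"os"
)

func main() {
	path := `C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar`

	if _, err := os.Stat(path); os.IsNotExist(err) {
		fmt.Printf("expected %q to exist after `image save`, but it does not\n", path)
		return
	}
	fmt.Println("saved image archive found:", path)
}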

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/powershell (432.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-808300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-808300"
functional_test.go:495: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-808300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-808300": exit status 1 (7m12.572182s)

                                                
                                                
** stderr ** 
	W0603 13:12:26.474089    7184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_DOCKER_SCRIPT: Error generating set output: write /dev/stdout: The pipe is being closed.
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_docker-env_1e51fd752a804983ed180295403359f1417a1165_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	E0603 13:19:37.048225    7184 out.go:190] Fprintf failed: write /dev/stdout: The pipe is being closed.

                                                
                                                
** /stderr **
functional_test.go:498: failed to run the command by deadline. exceeded timeout. powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-808300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-808300"
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/powershell (432.58s)
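To narrow down whether the docker-env script generation or the follow-up status call is what exceeds the deadline, the failing pipeline can be split into its halves (same binary, profile, and flags as in the log; a triage sketch only):

	out/minikube-windows-amd64.exe -p functional-808300 docker-env                      # inspect the emitted script before evaluating it
	out/minikube-windows-amd64.exe -p functional-808300 docker-env | Invoke-Expression
	out/minikube-windows-amd64.exe status -p functional-808300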

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: exit status 80 (388.8925ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:13:35.662888    1580 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0603 13:13:35.743665    1580 out.go:291] Setting OutFile to fd 1052 ...
	I0603 13:13:35.760240    1580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:13:35.760240    1580 out.go:304] Setting ErrFile to fd 1228...
	I0603 13:13:35.760308    1580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:13:35.774681    1580 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:13:35.774681    1580 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	I0603 13:13:35.894695    1580 cache.go:107] acquiring lock: {Name:mk9fa608b7858d6532a3d3d43d0c0843297964ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:13:35.897992    1580 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" -> "C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar" took 122.2527ms
	I0603 13:13:35.903283    1580 out.go:177] 
	W0603 13:13:35.905585    1580 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\addon-resizer-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar
	W0603 13:13:35.905741    1580 out.go:239] * 
	* 
	W0603 13:13:35.925481    1580 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_image_0b4c5fab104c183061db191397cc3c0143dc95a5_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_image_0b4c5fab104c183061db191397cc3c0143dc95a5_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:13:35.928486    1580 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:410: loading image into minikube from file: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.39s)
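The parse error above is consistent with the tarball never having been created (see the ImageSaveToFile failure earlier in this report). A minimal sketch that guards the load on the file actually existing (Test-Path and Write-Output are standard PowerShell; the path is the one from the log):

	if (Test-Path C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar) {
	    out/minikube-windows-amd64.exe -p functional-808300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
	} else {
	    Write-Output "addon-resizer-save.tar is missing; see the ImageSaveToFile failure above"
	}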

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (69.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-4hfj7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-4hfj7 -- sh -c "ping -c 1 172.22.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-4hfj7 -- sh -c "ping -c 1 172.22.144.1": exit status 1 (10.5259816s)

                                                
                                                
-- stdout --
	PING 172.22.144.1 (172.22.144.1): 56 data bytes
	
	--- 172.22.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:34:53.318324    1356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.22.144.1) from pod (busybox-fc5497c4f-4hfj7): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-fkkts -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-fkkts -- sh -c "ping -c 1 172.22.144.1"
E0603 13:35:14.749053   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-fkkts -- sh -c "ping -c 1 172.22.144.1": exit status 1 (10.5026024s)

                                                
                                                
-- stdout --
	PING 172.22.144.1 (172.22.144.1): 56 data bytes
	
	--- 172.22.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:35:04.361752   10864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.22.144.1) from pod (busybox-fc5497c4f-fkkts): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-vzbnc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-vzbnc -- sh -c "ping -c 1 172.22.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-vzbnc -- sh -c "ping -c 1 172.22.144.1": exit status 1 (10.5346803s)

                                                
                                                
-- stdout --
	PING 172.22.144.1 (172.22.144.1): 56 data bytes
	
	--- 172.22.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:35:15.388373   12852 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.22.144.1) from pod (busybox-fc5497c4f-vzbnc): exit status 1
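Before the post-mortem below, note that the failing check can be replayed directly against the host address the test resolved (commands and pod names are the ones from the log; a by-hand triage sketch):

	out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-4hfj7 -- sh -c "nslookup host.minikube.internal"
	out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-4hfj7 -- sh -c "ping -c 1 172.22.144.1"   # 100% packet loss in this run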
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-149700 -n ha-149700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-149700 -n ha-149700: (12.4558622s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 logs -n 25: (8.9703878s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | functional-808300 ssh pgrep          | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:16 UTC |                     |
	|         | buildkitd                            |                   |                   |         |                     |                     |
	| image   | functional-808300 image build -t     | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:16 UTC | 03 Jun 24 13:17 UTC |
	|         | localhost/my-image:functional-808300 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-808300 image ls           | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:17 UTC | 03 Jun 24 13:18 UTC |
	| delete  | -p functional-808300                 | functional-808300 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:21 UTC | 03 Jun 24 13:22 UTC |
	| start   | -p ha-149700 --wait=true             | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:22 UTC | 03 Jun 24 13:34 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- apply -f             | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- rollout status       | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- get pods -o          | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- get pods -o          | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | busybox-fc5497c4f-4hfj7 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | busybox-fc5497c4f-fkkts --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | busybox-fc5497c4f-vzbnc --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | busybox-fc5497c4f-4hfj7 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | busybox-fc5497c4f-fkkts --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | busybox-fc5497c4f-vzbnc --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | busybox-fc5497c4f-4hfj7 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | busybox-fc5497c4f-fkkts -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | busybox-fc5497c4f-vzbnc -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- get pods -o          | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | busybox-fc5497c4f-4hfj7              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:34 UTC |                     |
	|         | busybox-fc5497c4f-4hfj7 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.22.144.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:35 UTC | 03 Jun 24 13:35 UTC |
	|         | busybox-fc5497c4f-fkkts              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:35 UTC |                     |
	|         | busybox-fc5497c4f-fkkts -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.22.144.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:35 UTC | 03 Jun 24 13:35 UTC |
	|         | busybox-fc5497c4f-vzbnc              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-149700 -- exec                 | ha-149700         | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:35 UTC |                     |
	|         | busybox-fc5497c4f-vzbnc -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.22.144.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:22:56
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:22:56.971779   15052 out.go:291] Setting OutFile to fd 1132 ...
	I0603 13:22:56.972464   15052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:22:56.972464   15052 out.go:304] Setting ErrFile to fd 960...
	I0603 13:22:56.972464   15052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:22:56.997789   15052 out.go:298] Setting JSON to false
	I0603 13:22:57.000819   15052 start.go:129] hostinfo: {"hostname":"minikube3","uptime":21905,"bootTime":1717399071,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 13:22:57.000819   15052 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 13:22:57.005553   15052 out.go:177] * [ha-149700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 13:22:57.012713   15052 notify.go:220] Checking for updates...
	I0603 13:22:57.014937   15052 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:22:57.017495   15052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:22:57.020235   15052 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 13:22:57.022881   15052 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:22:57.025391   15052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:22:57.028824   15052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:23:02.588214   15052 out.go:177] * Using the hyperv driver based on user configuration
	I0603 13:23:02.592073   15052 start.go:297] selected driver: hyperv
	I0603 13:23:02.592073   15052 start.go:901] validating driver "hyperv" against <nil>
	I0603 13:23:02.592073   15052 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:23:02.645291   15052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 13:23:02.646831   15052 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:23:02.646905   15052 cni.go:84] Creating CNI manager for ""
	I0603 13:23:02.646997   15052 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0603 13:23:02.646997   15052 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0603 13:23:02.647201   15052 start.go:340] cluster config:
	{Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:23:02.647557   15052 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:23:02.651359   15052 out.go:177] * Starting "ha-149700" primary control-plane node in "ha-149700" cluster
	I0603 13:23:02.655235   15052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 13:23:02.655540   15052 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 13:23:02.655603   15052 cache.go:56] Caching tarball of preloaded images
	I0603 13:23:02.656037   15052 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 13:23:02.656195   15052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 13:23:02.656854   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:23:02.657015   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json: {Name:mk8cf1b94df5066df9477edea2b9709544c10d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:23:02.657680   15052 start.go:360] acquireMachinesLock for ha-149700: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:23:02.658290   15052 start.go:364] duration metric: took 609.4µs to acquireMachinesLock for "ha-149700"
	I0603 13:23:02.658290   15052 start.go:93] Provisioning new machine with config: &{Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:23:02.658290   15052 start.go:125] createHost starting for "" (driver="hyperv")
	I0603 13:23:02.661683   15052 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 13:23:02.661683   15052 start.go:159] libmachine.API.Create for "ha-149700" (driver="hyperv")
	I0603 13:23:02.661683   15052 client.go:168] LocalClient.Create starting
	I0603 13:23:02.662681   15052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0603 13:23:02.662681   15052 main.go:141] libmachine: Decoding PEM data...
	I0603 13:23:02.662681   15052 main.go:141] libmachine: Parsing certificate...
	I0603 13:23:02.662681   15052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0603 13:23:02.663685   15052 main.go:141] libmachine: Decoding PEM data...
	I0603 13:23:02.663685   15052 main.go:141] libmachine: Parsing certificate...
	I0603 13:23:02.663685   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 13:23:04.831013   15052 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 13:23:04.831013   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:04.831013   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 13:23:06.600809   15052 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 13:23:06.600867   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:06.600867   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 13:23:08.092889   15052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 13:23:08.092889   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:08.093065   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 13:23:11.805594   15052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 13:23:11.805803   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:11.808205   15052 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 13:23:12.301798   15052 main.go:141] libmachine: Creating SSH key...
	I0603 13:23:12.600518   15052 main.go:141] libmachine: Creating VM...
	I0603 13:23:12.600890   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 13:23:15.505229   15052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 13:23:15.505229   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:15.505229   15052 main.go:141] libmachine: Using switch "Default Switch"
	I0603 13:23:15.505229   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 13:23:17.256503   15052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 13:23:17.257443   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:17.257555   15052 main.go:141] libmachine: Creating VHD
	I0603 13:23:17.257555   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 13:23:21.000794   15052 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6C3DAB81-D3E4-465D-93E0-487E78DBE9F3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 13:23:21.000794   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:21.001716   15052 main.go:141] libmachine: Writing magic tar header
	I0603 13:23:21.001716   15052 main.go:141] libmachine: Writing SSH key tar header
	I0603 13:23:21.012967   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 13:23:24.158322   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:24.158322   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:24.158515   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\disk.vhd' -SizeBytes 20000MB
	I0603 13:23:26.700668   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:26.701363   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:26.701497   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-149700 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 13:23:30.293322   15052 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-149700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 13:23:30.293322   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:30.294384   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-149700 -DynamicMemoryEnabled $false
	I0603 13:23:32.525424   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:32.525424   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:32.525645   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-149700 -Count 2
	I0603 13:23:34.688121   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:34.688483   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:34.688633   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-149700 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\boot2docker.iso'
	I0603 13:23:37.322304   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:37.322424   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:37.322424   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-149700 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\disk.vhd'
	I0603 13:23:39.966344   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:39.966597   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:39.966717   15052 main.go:141] libmachine: Starting VM...
	I0603 13:23:39.966765   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-149700
	I0603 13:23:43.020412   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:43.021256   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:43.021290   15052 main.go:141] libmachine: Waiting for host to start...
	I0603 13:23:43.021290   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:23:45.253602   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:23:45.253799   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:45.253799   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:23:47.749527   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:47.749527   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:48.759800   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:23:50.984916   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:23:50.984916   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:50.985152   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:23:53.506570   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:53.507075   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:54.510248   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:23:56.697481   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:23:56.697481   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:56.698452   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:23:59.169924   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:59.170413   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:00.180091   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:02.485122   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:02.485213   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:02.485213   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:05.008919   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:24:05.008919   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:06.015954   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:08.278081   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:08.278231   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:08.278337   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:10.873641   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:10.874620   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:10.874742   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:13.053090   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:13.054058   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:13.054058   15052 machine.go:94] provisionDockerMachine start ...
	I0603 13:24:13.054058   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:15.220841   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:15.220841   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:15.220841   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:17.761253   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:17.762305   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:17.767870   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:24:17.778210   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:24:17.778210   15052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:24:17.914383   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:24:17.914383   15052 buildroot.go:166] provisioning hostname "ha-149700"
	I0603 13:24:17.914946   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:20.024781   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:20.024781   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:20.024781   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:22.526550   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:22.526550   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:22.544151   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:24:22.544845   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:24:22.544845   15052 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-149700 && echo "ha-149700" | sudo tee /etc/hostname
	I0603 13:24:22.702323   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-149700
	
	I0603 13:24:22.702323   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:24.732534   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:24.732534   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:24.743687   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:27.196276   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:27.196276   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:27.212536   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:24:27.213102   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:24:27.213102   15052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-149700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-149700/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-149700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:24:27.362606   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:24:27.362606   15052 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 13:24:27.362606   15052 buildroot.go:174] setting up certificates
	I0603 13:24:27.362606   15052 provision.go:84] configureAuth start
	I0603 13:24:27.363161   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:29.442103   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:29.442103   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:29.454703   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:31.937506   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:31.937506   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:31.948526   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:33.980937   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:33.980937   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:33.992876   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:36.436525   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:36.448333   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:36.448333   15052 provision.go:143] copyHostCerts
	I0603 13:24:36.448535   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 13:24:36.448870   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 13:24:36.448946   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 13:24:36.449366   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 13:24:36.450675   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 13:24:36.450993   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 13:24:36.450993   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 13:24:36.450993   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 13:24:36.452060   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 13:24:36.452060   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 13:24:36.452642   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 13:24:36.452958   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 13:24:36.453823   15052 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-149700 san=[127.0.0.1 172.22.153.250 ha-149700 localhost minikube]
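	provision.go:117 above generates the Docker server certificate, signed by the minikube CA, with the listed SANs (127.0.0.1, 172.22.153.250, ha-149700, localhost, minikube). A hedged sketch of that step using crypto/x509; the key size, validity window, file names, and the omitted error handling are assumptions of this sketch, not what minikube actually does:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the CA produced earlier (ca.pem / ca-key.pem in the log).
		// Errors are ignored for brevity; real code must check every step.
		caPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caPEM)
		caCert, _ := x509.ParseCertificate(caBlock.Bytes)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key

		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-149700"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-149700", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.22.153.250")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}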
	I0603 13:24:36.614064   15052 provision.go:177] copyRemoteCerts
	I0603 13:24:36.624718   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:24:36.624718   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:38.699126   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:38.699126   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:38.710513   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:41.106225   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:41.119523   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:41.119797   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:24:41.233934   15052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.609177s)
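	sshutil.go:53 and ssh_runner.go above open a key-based SSH session to the guest and time each remote command; the 4.6s reported for the mkdir includes the PowerShell IP lookups that precede the SSH dial. A minimal runner sketch with golang.org/x/crypto/ssh; the InsecureIgnoreHostKey callback and the helper name are assumptions of this sketch:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// runOverSSH connects as the docker user with the machine's private key,
	// runs one remote command, and returns its combined output and duration.
	func runOverSSH(addr, keyPath, cmd string) (string, time.Duration, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", 0, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", 0, err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a test environment
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", 0, err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", 0, err
		}
		defer sess.Close()
		start := time.Now()
		out, err := sess.CombinedOutput(cmd)
		return string(out), time.Since(start), err
	}

	func main() {
		out, d, err := runOverSSH("172.22.153.250:22",
			`C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa`,
			"sudo mkdir -p /etc/docker")
		fmt.Println(out, d, err)
	}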
	I0603 13:24:41.233934   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 13:24:41.234564   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:24:41.278632   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 13:24:41.278632   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0603 13:24:41.313310   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 13:24:41.320690   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:24:41.355137   15052 provision.go:87] duration metric: took 13.9924152s to configureAuth
	I0603 13:24:41.355137   15052 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:24:41.362195   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:24:41.362195   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:43.400999   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:43.412033   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:43.412033   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:45.805813   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:45.816423   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:45.822508   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:24:45.823040   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:24:45.823220   15052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 13:24:45.957871   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 13:24:45.957955   15052 buildroot.go:70] root file system type: tmpfs
	I0603 13:24:45.958142   15052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 13:24:45.958221   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:47.989961   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:47.989961   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:47.990052   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:50.381488   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:50.392253   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:50.398456   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:24:50.399063   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:24:50.399219   15052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 13:24:50.558987   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 13:24:50.558987   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:52.582144   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:52.582144   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:52.582144   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:54.983560   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:54.983560   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:55.000888   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:24:55.001559   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:24:55.001559   15052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 13:24:57.148984   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 13:24:57.148984   15052 machine.go:97] duration metric: took 44.0945608s to provisionDockerMachine
	I0603 13:24:57.148984   15052 client.go:171] duration metric: took 1m54.4863569s to LocalClient.Create
	I0603 13:24:57.148984   15052 start.go:167] duration metric: took 1m54.4863569s to libmachine.API.Create "ha-149700"
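	The command run at 13:24:55 writes the rendered unit to docker.service.new and only swaps it into place (then reloads, enables, and restarts Docker) when diff -u finds a difference, which keeps repeated provisioning idempotent. A small sketch of how that one-liner can be assembled; buildDockerUnitSwap is a hypothetical helper, not a minikube function:

	package main

	import "fmt"

	// buildDockerUnitSwap returns the idempotent update command seen in the log:
	// if the freshly rendered unit differs from the installed one, move it into
	// place and reload/enable/restart docker; otherwise leave everything alone.
	func buildDockerUnitSwap(unitPath string) string {
		return fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
				"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
			unitPath)
	}

	func main() {
		fmt.Println(buildDockerUnitSwap("/lib/systemd/system/docker.service"))
	}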
	I0603 13:24:57.148984   15052 start.go:293] postStartSetup for "ha-149700" (driver="hyperv")
	I0603 13:24:57.148984   15052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:24:57.159789   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:24:57.159789   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:59.257239   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:59.257239   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:59.268152   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:01.699585   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:01.710420   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:01.710614   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:25:01.820812   15052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6609839s)
	I0603 13:25:01.831267   15052 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:25:01.838997   15052 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:25:01.839090   15052 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 13:25:01.839542   15052 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 13:25:01.839917   15052 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 13:25:01.839917   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 13:25:01.851988   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:25:01.869309   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 13:25:01.912588   15052 start.go:296] duration metric: took 4.763564s for postStartSetup
	I0603 13:25:01.915943   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:25:03.908512   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:25:03.908512   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:03.919617   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:06.368965   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:06.368965   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:06.379687   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:25:06.383105   15052 start.go:128] duration metric: took 2m3.7237942s to createHost
	I0603 13:25:06.383290   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:25:08.358784   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:25:08.358784   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:08.369990   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:10.817917   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:10.817917   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:10.834799   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:25:10.834945   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:25:10.834945   15052 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:25:10.974885   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717421110.983953263
	
	I0603 13:25:10.974885   15052 fix.go:216] guest clock: 1717421110.983953263
	I0603 13:25:10.974885   15052 fix.go:229] Guest: 2024-06-03 13:25:10.983953263 +0000 UTC Remote: 2024-06-03 13:25:06.383105 +0000 UTC m=+129.573838201 (delta=4.600848263s)
	I0603 13:25:10.974885   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:25:13.012725   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:25:13.012725   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:13.012725   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:15.451275   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:15.451275   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:15.456697   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:25:15.457543   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:25:15.457543   15052 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717421110
	I0603 13:25:15.601465   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 13:25:10 UTC 2024
	
	I0603 13:25:15.602021   15052 fix.go:236] clock set: Mon Jun  3 13:25:10 UTC 2024
	 (err=<nil>)
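	fix.go above reads the guest clock with date +%s.%N, compares it to the host-side timestamp, and rewrites the guest clock with date -s because the 4.6s delta is too large. A small sketch of that check; the 2-second tolerance and the choice of timestamp written back are assumptions of this sketch, not minikube's actual values:

	package main

	import (
		"fmt"
		"time"
	)

	// syncCmd returns the "sudo date -s @..." command when the guest clock has
	// drifted more than maxSkew from the reference time, and "" when it is
	// close enough to leave alone.
	func syncCmd(guest, reference time.Time, maxSkew time.Duration) string {
		drift := guest.Sub(reference)
		if drift < 0 {
			drift = -drift
		}
		if drift <= maxSkew {
			return ""
		}
		return fmt.Sprintf("sudo date -s @%d", reference.Unix())
	}

	func main() {
		guest := time.Unix(1717421110, 983953263) // value reported by date +%s.%N in the log
		fmt.Println(syncCmd(guest, time.Now(), 2*time.Second))
	}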
	I0603 13:25:15.602059   15052 start.go:83] releasing machines lock for "ha-149700", held for 2m12.9426343s
	I0603 13:25:15.602235   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:25:17.617381   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:25:17.627940   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:17.627940   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:20.021978   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:20.032664   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:20.037889   15052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:25:20.038024   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:25:20.046480   15052 ssh_runner.go:195] Run: cat /version.json
	I0603 13:25:20.046480   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:25:22.198248   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:25:22.198397   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:22.198397   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:22.206096   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:25:22.206629   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:22.206629   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:24.726677   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:24.726677   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:24.737483   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:25:24.759151   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:24.759151   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:24.759767   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:25:24.840241   15052 ssh_runner.go:235] Completed: cat /version.json: (4.7894203s)
	I0603 13:25:24.850395   15052 ssh_runner.go:195] Run: systemctl --version
	I0603 13:25:24.948416   15052 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.909545s)
	I0603 13:25:24.960549   15052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:25:24.968672   15052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:25:24.979283   15052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:25:25.004051   15052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:25:25.004051   15052 start.go:494] detecting cgroup driver to use...
	I0603 13:25:25.004165   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:25:25.046848   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 13:25:25.087326   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 13:25:25.106385   15052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 13:25:25.116439   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 13:25:25.150488   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 13:25:25.183566   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 13:25:25.214460   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 13:25:25.243720   15052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:25:25.272735   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 13:25:25.303391   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 13:25:25.334212   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 13:25:25.365143   15052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:25:25.394136   15052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:25:25.420574   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:25:25.602604   15052 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 13:25:25.628109   15052 start.go:494] detecting cgroup driver to use...
	I0603 13:25:25.641855   15052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 13:25:25.671968   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:25:25.702429   15052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:25:25.740985   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:25:25.772528   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 13:25:25.810908   15052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 13:25:25.867763   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 13:25:25.893304   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:25:25.936789   15052 ssh_runner.go:195] Run: which cri-dockerd
	I0603 13:25:25.952893   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 13:25:25.969481   15052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 13:25:26.009771   15052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 13:25:26.197215   15052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 13:25:26.374711   15052 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 13:25:26.374854   15052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 13:25:26.418445   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:25:26.596522   15052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 13:25:29.080378   15052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4838353s)
	I0603 13:25:29.099783   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 13:25:29.133358   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 13:25:29.173998   15052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 13:25:29.354108   15052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 13:25:29.544867   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:25:29.719028   15052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 13:25:29.755111   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 13:25:29.791777   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:25:29.961104   15052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 13:25:30.070180   15052 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 13:25:30.082027   15052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 13:25:30.096157   15052 start.go:562] Will wait 60s for crictl version
	I0603 13:25:30.108573   15052 ssh_runner.go:195] Run: which crictl
	I0603 13:25:30.126725   15052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:25:30.180047   15052 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 13:25:30.190874   15052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 13:25:30.234607   15052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 13:25:30.266834   15052 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 13:25:30.267000   15052 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 13:25:30.271305   15052 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 13:25:30.271305   15052 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 13:25:30.271305   15052 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 13:25:30.271305   15052 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 13:25:30.274317   15052 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 13:25:30.274317   15052 ip.go:210] interface addr: 172.22.144.1/20
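	ip.go:172-210 above finds the host side of the Hyper-V switch by matching interface names against the prefix "vEthernet (Default Switch)" and takes its IPv4 address (172.22.144.1), which is then injected into the guest's /etc/hosts as host.minikube.internal. A sketch of that lookup with the standard net package; the exact matching rules in ip.go may differ:

	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	// hostInterfaceIPv4 returns the first IPv4 address of the first interface
	// whose name starts with the given prefix.
	func hostInterfaceIPv4(prefix string) (string, error) {
		ifaces, err := net.Interfaces()
		if err != nil {
			return "", err
		}
		for _, ifc := range ifaces {
			if !strings.HasPrefix(ifc.Name, prefix) {
				continue // e.g. "Ethernet 2" and "Loopback Pseudo-Interface 1" in the log
			}
			addrs, err := ifc.Addrs()
			if err != nil {
				return "", err
			}
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
					return ipnet.IP.String(), nil
				}
			}
		}
		return "", fmt.Errorf("no interface matching %q with an IPv4 address", prefix)
	}

	func main() {
		ip, err := hostInterfaceIPv4("vEthernet (Default Switch)")
		fmt.Println(ip, err)
	}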
	I0603 13:25:30.286678   15052 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 13:25:30.289113   15052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:25:30.326570   15052 kubeadm.go:877] updating cluster {Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:25:30.326570   15052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 13:25:30.335177   15052 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 13:25:30.358266   15052 docker.go:685] Got preloaded images: 
	I0603 13:25:30.358266   15052 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0603 13:25:30.371422   15052 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 13:25:30.397520   15052 ssh_runner.go:195] Run: which lz4
	I0603 13:25:30.406190   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0603 13:25:30.416083   15052 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:25:30.425889   15052 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:25:30.425889   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0603 13:25:32.602573   15052 docker.go:649] duration metric: took 2.1961283s to copy over tarball
	I0603 13:25:32.615512   15052 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:25:41.132677   15052 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5170947s)
	I0603 13:25:41.132677   15052 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:25:41.198936   15052 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 13:25:41.219685   15052 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0603 13:25:41.268541   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:25:41.460379   15052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 13:25:44.392123   15052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9317196s)
	I0603 13:25:44.404585   15052 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 13:25:44.424893   15052 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0603 13:25:44.424893   15052 cache_images.go:84] Images are preloaded, skipping loading
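	cache_images.go:84 concludes that image loading can be skipped because every required image appears in the stdout block above. A sketch of that check; imagesPreloaded is a hypothetical helper that mirrors the docker images --format comparison shown in the log:

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
	)

	// imagesPreloaded reports whether every required image already shows up in
	// the output of docker images --format {{.Repository}}:{{.Tag}}.
	func imagesPreloaded(required []string) (bool, error) {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return false, err
		}
		have := map[string]bool{}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			have[sc.Text()] = true
		}
		for _, img := range required {
			if !have[img] {
				return false, nil // e.g. "kube-apiserver:v1.30.1 wasn't preloaded" earlier in the log
			}
		}
		return true, nil
	}

	func main() {
		ok, err := imagesPreloaded([]string{"registry.k8s.io/kube-apiserver:v1.30.1"})
		fmt.Println(ok, err)
	}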
	I0603 13:25:44.424893   15052 kubeadm.go:928] updating node { 172.22.153.250 8443 v1.30.1 docker true true} ...
	I0603 13:25:44.424893   15052 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-149700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.153.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:25:44.438080   15052 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 13:25:44.472067   15052 cni.go:84] Creating CNI manager for ""
	I0603 13:25:44.472067   15052 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 13:25:44.472067   15052 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:25:44.472067   15052 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.22.153.250 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-149700 NodeName:ha-149700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.22.153.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.22.153.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:25:44.472469   15052 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.22.153.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-149700"
	  kubeletExtraArgs:
	    node-ip: 172.22.153.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.22.153.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:25:44.472469   15052 kube-vip.go:115] generating kube-vip config ...
	I0603 13:25:44.484194   15052 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 13:25:44.507949   15052 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 13:25:44.513841   15052 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.22.159.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0603 13:25:44.534251   15052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:25:44.554405   15052 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:25:44.565567   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 13:25:44.580255   15052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0603 13:25:44.614980   15052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:25:44.641482   15052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0603 13:25:44.669171   15052 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0603 13:25:44.707720   15052 ssh_runner.go:195] Run: grep 172.22.159.254	control-plane.minikube.internal$ /etc/hosts
	I0603 13:25:44.712456   15052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:25:44.749641   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:25:44.940318   15052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:25:44.972554   15052 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700 for IP: 172.22.153.250
	I0603 13:25:44.972554   15052 certs.go:194] generating shared ca certs ...
	I0603 13:25:44.972554   15052 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:44.973103   15052 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 13:25:44.973758   15052 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 13:25:44.974007   15052 certs.go:256] generating profile certs ...
	I0603 13:25:44.975000   15052 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.key
	I0603 13:25:44.975110   15052 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.crt with IP's: []
	I0603 13:25:45.211152   15052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.crt ...
	I0603 13:25:45.211152   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.crt: {Name:mkd40092c17fb57650e7b7fbf7406b5922892c8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:45.211833   15052 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.key ...
	I0603 13:25:45.211833   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.key: {Name:mkcf69de3b4a9d0e912390dcbe3d7781732b7884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:45.213267   15052 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.5b5144c8
	I0603 13:25:45.214285   15052 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.5b5144c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.22.153.250 172.22.159.254]
	I0603 13:25:45.345867   15052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.5b5144c8 ...
	I0603 13:25:45.345867   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.5b5144c8: {Name:mk68336b476a2079c07481702cd1c43f36b5b5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:45.347283   15052 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.5b5144c8 ...
	I0603 13:25:45.347283   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.5b5144c8: {Name:mk20fc4aafb5f3cbc5faf210774bf49b7ab01a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:45.348947   15052 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.5b5144c8 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt
	I0603 13:25:45.356765   15052 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.5b5144c8 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key
	I0603 13:25:45.362196   15052 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key
	I0603 13:25:45.363766   15052 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt with IP's: []
	I0603 13:25:45.459849   15052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt ...
	I0603 13:25:45.459849   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt: {Name:mk20f2de9c598d9a48f4f9f2e3b6b9b2a4e96582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:45.466739   15052 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key ...
	I0603 13:25:45.466739   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key: {Name:mk507e8c3d191fe53b20c6ca6fc8eae567a9ed39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:45.468126   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 13:25:45.469234   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 13:25:45.469234   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 13:25:45.469234   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 13:25:45.469234   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 13:25:45.469234   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 13:25:45.469234   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 13:25:45.470533   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 13:25:45.478565   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 13:25:45.479283   15052 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 13:25:45.479407   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 13:25:45.479548   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 13:25:45.479846   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 13:25:45.479846   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 13:25:45.479846   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 13:25:45.479846   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 13:25:45.479846   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:25:45.480955   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
	I0603 13:25:45.482780   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:25:45.527537   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:25:45.573232   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:25:45.615867   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 13:25:45.658020   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 13:25:45.700855   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:25:45.740483   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:25:45.786374   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:25:45.826029   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 13:25:45.862903   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:25:45.903634   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 13:25:45.945848   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:25:45.986472   15052 ssh_runner.go:195] Run: openssl version
	I0603 13:25:46.005262   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 13:25:46.037466   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 13:25:46.044317   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 13:25:46.055929   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 13:25:46.075866   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:25:46.109316   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:25:46.140072   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:25:46.149241   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:25:46.159647   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:25:46.182534   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:25:46.214794   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 13:25:46.248014   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 13:25:46.251059   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 13:25:46.267340   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 13:25:46.290866   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
	I0603 13:25:46.327040   15052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:25:46.336704   15052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 13:25:46.337107   15052 kubeadm.go:391] StartCluster: {Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:25:46.347489   15052 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 13:25:46.377823   15052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 13:25:46.407277   15052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:25:46.434984   15052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:25:46.459495   15052 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:25:46.459540   15052 kubeadm.go:156] found existing configuration files:
	
	I0603 13:25:46.471002   15052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:25:46.485792   15052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:25:46.498743   15052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:25:46.528143   15052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:25:46.543229   15052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:25:46.555001   15052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:25:46.589878   15052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:25:46.608518   15052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:25:46.619178   15052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:25:46.648018   15052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:25:46.664095   15052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:25:46.676493   15052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:25:46.693673   15052 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:25:47.078217   15052 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:26:01.248145   15052 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 13:26:01.248311   15052 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:26:01.248536   15052 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:26:01.248749   15052 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:26:01.248749   15052 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:26:01.248749   15052 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:26:01.252648   15052 out.go:204]   - Generating certificates and keys ...
	I0603 13:26:01.253004   15052 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:26:01.253168   15052 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:26:01.253308   15052 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 13:26:01.253308   15052 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 13:26:01.253308   15052 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 13:26:01.253308   15052 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 13:26:01.253895   15052 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 13:26:01.253895   15052 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-149700 localhost] and IPs [172.22.153.250 127.0.0.1 ::1]
	I0603 13:26:01.253895   15052 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 13:26:01.254541   15052 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-149700 localhost] and IPs [172.22.153.250 127.0.0.1 ::1]
	I0603 13:26:01.254669   15052 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 13:26:01.254669   15052 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 13:26:01.254669   15052 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 13:26:01.254669   15052 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:26:01.255208   15052 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:26:01.255367   15052 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 13:26:01.255446   15052 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:26:01.255446   15052 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:26:01.255446   15052 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:26:01.255974   15052 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:26:01.256223   15052 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:26:01.261213   15052 out.go:204]   - Booting up control plane ...
	I0603 13:26:01.261483   15052 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:26:01.261648   15052 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:26:01.261818   15052 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:26:01.262098   15052 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:26:01.262446   15052 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:26:01.262552   15052 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:26:01.262963   15052 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 13:26:01.263196   15052 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 13:26:01.263250   15052 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.208001ms
	I0603 13:26:01.263250   15052 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 13:26:01.263250   15052 kubeadm.go:309] [api-check] The API server is healthy after 9.113220466s
	I0603 13:26:01.263828   15052 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 13:26:01.263871   15052 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 13:26:01.263871   15052 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 13:26:01.264512   15052 kubeadm.go:309] [mark-control-plane] Marking the node ha-149700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 13:26:01.264512   15052 kubeadm.go:309] [bootstrap-token] Using token: 5v14cf.t70vxkjeta9v5oor
	I0603 13:26:01.267349   15052 out.go:204]   - Configuring RBAC rules ...
	I0603 13:26:01.267349   15052 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 13:26:01.267349   15052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 13:26:01.267349   15052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 13:26:01.268961   15052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 13:26:01.269058   15052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 13:26:01.269058   15052 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 13:26:01.269058   15052 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 13:26:01.269058   15052 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 13:26:01.269058   15052 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.269058   15052 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.269058   15052 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.269058   15052 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 13:26:01.269058   15052 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 13:26:01.269058   15052 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.269058   15052 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.269058   15052 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.269058   15052 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 13:26:01.269058   15052 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 13:26:01.269058   15052 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.271693   15052 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 13:26:01.271693   15052 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 13:26:01.271693   15052 kubeadm.go:309] 
	I0603 13:26:01.271693   15052 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5v14cf.t70vxkjeta9v5oor \
	I0603 13:26:01.271693   15052 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f \
	I0603 13:26:01.271693   15052 kubeadm.go:309] 	--control-plane 
	I0603 13:26:01.271693   15052 kubeadm.go:309] 
	I0603 13:26:01.271693   15052 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 13:26:01.271693   15052 kubeadm.go:309] 
	I0603 13:26:01.271693   15052 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5v14cf.t70vxkjeta9v5oor \
	I0603 13:26:01.271693   15052 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f 
	I0603 13:26:01.271693   15052 cni.go:84] Creating CNI manager for ""
	I0603 13:26:01.271693   15052 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 13:26:01.274785   15052 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 13:26:01.291933   15052 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 13:26:01.300665   15052 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 13:26:01.300665   15052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 13:26:01.349173   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0603 13:26:02.037816   15052 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:26:02.053849   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-149700 minikube.k8s.io/updated_at=2024_06_03T13_26_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=ha-149700 minikube.k8s.io/primary=true
	I0603 13:26:02.054393   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:02.067248   15052 ops.go:34] apiserver oom_adj: -16
	I0603 13:26:02.270112   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:02.771826   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:03.271881   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:03.784185   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:04.276248   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:04.775338   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:05.276629   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:05.776157   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:06.278835   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:06.786505   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:07.283063   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:07.772662   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:08.271795   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:08.781482   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:09.276934   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:09.786075   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:10.270117   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:10.785010   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:11.285597   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:11.783498   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:12.272979   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:12.786406   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:13.275225   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:13.440668   15052 kubeadm.go:1107] duration metric: took 11.4029141s to wait for elevateKubeSystemPrivileges
	W0603 13:26:13.440668   15052 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 13:26:13.440668   15052 kubeadm.go:393] duration metric: took 27.1034717s to StartCluster
	I0603 13:26:13.440668   15052 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:26:13.440668   15052 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:26:13.444308   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:26:13.445491   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 13:26:13.445491   15052 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:26:13.445491   15052 start.go:240] waiting for startup goroutines ...
	I0603 13:26:13.445491   15052 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:26:13.445491   15052 addons.go:69] Setting default-storageclass=true in profile "ha-149700"
	I0603 13:26:13.446035   15052 addons.go:69] Setting storage-provisioner=true in profile "ha-149700"
	I0603 13:26:13.446035   15052 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-149700"
	I0603 13:26:13.446167   15052 addons.go:234] Setting addon storage-provisioner=true in "ha-149700"
	I0603 13:26:13.446313   15052 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:26:13.446313   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:26:13.447358   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:26:13.447757   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:26:13.605680   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.22.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 13:26:13.994779   15052 start.go:946] {"host.minikube.internal": 172.22.144.1} host record injected into CoreDNS's ConfigMap
	I0603 13:26:15.734189   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:26:15.734189   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:15.740627   15052 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:26:15.745447   15052 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:26:15.745529   15052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:26:15.745652   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:26:15.937360   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:26:15.937692   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:15.938667   15052 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:26:15.939292   15052 kapi.go:59] client config for ha-149700: &rest.Config{Host:"https://172.22.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-149700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-149700\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 13:26:15.940822   15052 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 13:26:15.941521   15052 addons.go:234] Setting addon default-storageclass=true in "ha-149700"
	I0603 13:26:15.941584   15052 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:26:15.942945   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:26:17.998627   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:26:18.004614   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:18.004817   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:26:18.179336   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:26:18.179336   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:18.179336   15052 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:26:18.191831   15052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:26:18.191896   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:26:20.449805   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:26:20.455722   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:20.455902   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:26:20.757446   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:26:20.757446   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:20.757915   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:26:20.917319   15052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:26:23.031955   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:26:23.043363   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:23.043503   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:26:23.181225   15052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:26:23.320909   15052 round_trippers.go:463] GET https://172.22.159.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0603 13:26:23.320909   15052 round_trippers.go:469] Request Headers:
	I0603 13:26:23.320909   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:26:23.320909   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:26:23.332465   15052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 13:26:23.333200   15052 round_trippers.go:463] PUT https://172.22.159.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0603 13:26:23.333200   15052 round_trippers.go:469] Request Headers:
	I0603 13:26:23.333200   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:26:23.333200   15052 round_trippers.go:473]     Content-Type: application/json
	I0603 13:26:23.333200   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:26:23.336724   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:26:23.342692   15052 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0603 13:26:23.345167   15052 addons.go:510] duration metric: took 9.8995932s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0603 13:26:23.345283   15052 start.go:245] waiting for cluster config update ...
	I0603 13:26:23.345283   15052 start.go:254] writing updated cluster config ...
	I0603 13:26:23.348004   15052 out.go:177] 
	I0603 13:26:23.359402   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:26:23.359733   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:26:23.359982   15052 out.go:177] * Starting "ha-149700-m02" control-plane node in "ha-149700" cluster
	I0603 13:26:23.365962   15052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 13:26:23.365962   15052 cache.go:56] Caching tarball of preloaded images
	I0603 13:26:23.365962   15052 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 13:26:23.370181   15052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 13:26:23.370374   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:26:23.371071   15052 start.go:360] acquireMachinesLock for ha-149700-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:26:23.371071   15052 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-149700-m02"
	I0603 13:26:23.372853   15052 start.go:93] Provisioning new machine with config: &{Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:26:23.372853   15052 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0603 13:26:23.373772   15052 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 13:26:23.375749   15052 start.go:159] libmachine.API.Create for "ha-149700" (driver="hyperv")
	I0603 13:26:23.375749   15052 client.go:168] LocalClient.Create starting
	I0603 13:26:23.375749   15052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0603 13:26:23.376363   15052 main.go:141] libmachine: Decoding PEM data...
	I0603 13:26:23.376442   15052 main.go:141] libmachine: Parsing certificate...
	I0603 13:26:23.376512   15052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0603 13:26:23.376512   15052 main.go:141] libmachine: Decoding PEM data...
	I0603 13:26:23.376512   15052 main.go:141] libmachine: Parsing certificate...
	I0603 13:26:23.376512   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 13:26:25.182459   15052 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 13:26:25.182459   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:25.190703   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 13:26:26.950329   15052 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 13:26:26.950329   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:26.951563   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 13:26:28.390556   15052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 13:26:28.390556   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:28.391801   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 13:26:31.858648   15052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 13:26:31.858648   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:31.861007   15052 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 13:26:32.339528   15052 main.go:141] libmachine: Creating SSH key...
	I0603 13:26:33.029688   15052 main.go:141] libmachine: Creating VM...
	I0603 13:26:33.030310   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 13:26:35.798351   15052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 13:26:35.798351   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:35.798351   15052 main.go:141] libmachine: Using switch "Default Switch"
	I0603 13:26:35.798351   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 13:26:37.519314   15052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 13:26:37.519314   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:37.519314   15052 main.go:141] libmachine: Creating VHD
	I0603 13:26:37.519314   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 13:26:41.198703   15052 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 42D308E2-C6AA-49D1-88E4-01A60A34AA2A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 13:26:41.198703   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:41.198703   15052 main.go:141] libmachine: Writing magic tar header
	I0603 13:26:41.198703   15052 main.go:141] libmachine: Writing SSH key tar header
	I0603 13:26:41.208407   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 13:26:44.306333   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:26:44.315055   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:44.315055   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\disk.vhd' -SizeBytes 20000MB
	I0603 13:26:46.779376   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:26:46.779376   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:46.779533   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-149700-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 13:26:50.286235   15052 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-149700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 13:26:50.286235   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:50.286235   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-149700-m02 -DynamicMemoryEnabled $false
	I0603 13:26:52.450139   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:26:52.460497   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:52.460497   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-149700-m02 -Count 2
	I0603 13:26:54.527184   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:26:54.527184   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:54.536240   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-149700-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\boot2docker.iso'
	I0603 13:26:57.003684   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:26:57.012903   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:57.012965   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-149700-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\disk.vhd'
	I0603 13:26:59.701873   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:26:59.701873   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:59.701873   15052 main.go:141] libmachine: Starting VM...
	I0603 13:26:59.701873   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-149700-m02
	I0603 13:27:02.870372   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:27:02.870372   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:02.870372   15052 main.go:141] libmachine: Waiting for host to start...
	I0603 13:27:02.873843   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:05.125466   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:05.133960   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:05.134039   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:07.595877   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:27:07.595877   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:08.608280   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:10.752042   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:10.752669   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:10.752669   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:13.189611   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:27:13.192476   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:14.193658   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:16.340851   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:16.350737   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:16.350737   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:18.825973   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:27:18.825973   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:19.837035   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:21.968263   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:21.975575   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:21.975575   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:24.492462   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:27:24.492462   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:25.499562   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:27.740331   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:27.740331   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:27.740331   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:30.280091   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:27:30.291964   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:30.291964   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:32.413454   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:32.413454   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:32.423977   15052 machine.go:94] provisionDockerMachine start ...
	I0603 13:27:32.424275   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:34.531811   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:34.531811   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:34.532008   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:37.038005   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:27:37.038283   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:37.044423   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:27:37.044574   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:27:37.045163   15052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:27:37.174273   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:27:37.174273   15052 buildroot.go:166] provisioning hostname "ha-149700-m02"
	I0603 13:27:37.174273   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:39.255739   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:39.255739   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:39.266184   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:41.761420   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:27:41.761420   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:41.779651   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:27:41.780137   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:27:41.780226   15052 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-149700-m02 && echo "ha-149700-m02" | sudo tee /etc/hostname
	I0603 13:27:41.937014   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-149700-m02
	
	I0603 13:27:41.937014   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:44.036454   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:44.036454   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:44.048757   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:46.542534   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:27:46.542534   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:46.557589   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:27:46.557589   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:27:46.557589   15052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-149700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-149700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-149700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:27:46.699968   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:27:46.699968   15052 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 13:27:46.699968   15052 buildroot.go:174] setting up certificates
	I0603 13:27:46.699968   15052 provision.go:84] configureAuth start
	I0603 13:27:46.699968   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:48.767318   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:48.767318   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:48.772848   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:51.210418   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:27:51.210418   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:51.210418   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:53.301033   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:53.310589   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:53.310732   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:55.737200   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:27:55.747661   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:55.747661   15052 provision.go:143] copyHostCerts
	I0603 13:27:55.747925   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 13:27:55.748243   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 13:27:55.748243   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 13:27:55.748812   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 13:27:55.750142   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 13:27:55.750556   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 13:27:55.750556   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 13:27:55.750664   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 13:27:55.751906   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 13:27:55.751980   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 13:27:55.751980   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 13:27:55.752623   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 13:27:55.753385   15052 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-149700-m02 san=[127.0.0.1 172.22.154.57 ha-149700-m02 localhost minikube]
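	provision.go then generates a server certificate whose SANs cover the node IP and hostnames listed above. A rough, self-contained sketch of that step with Go's standard library — self-signed here only to keep the example short, whereas minikube signs with the ca.pem/ca-key.pem shown in the log:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-149700-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: node IP, hostname, localhost, minikube.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.22.154.57")},
		DNSNames:    []string{"ha-149700-m02", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```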
	I0603 13:27:55.941777   15052 provision.go:177] copyRemoteCerts
	I0603 13:27:55.952414   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:27:55.952414   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:58.010790   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:58.020312   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:58.020312   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:00.433288   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:00.444122   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:00.444404   15052 sshutil.go:53] new ssh client: &{IP:172.22.154.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\id_rsa Username:docker}
	I0603 13:28:00.550347   15052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5978078s)
	I0603 13:28:00.550424   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 13:28:00.550474   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:28:00.593507   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 13:28:00.593507   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 13:28:00.637744   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 13:28:00.638133   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:28:00.678955   15052 provision.go:87] duration metric: took 13.9788718s to configureAuth
	I0603 13:28:00.679070   15052 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:28:00.679750   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:28:00.679750   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:02.744660   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:02.754643   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:02.754643   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:05.168486   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:05.179367   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:05.185509   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:28:05.185509   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:28:05.186101   15052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 13:28:05.317399   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 13:28:05.317497   15052 buildroot.go:70] root file system type: tmpfs
	I0603 13:28:05.317677   15052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 13:28:05.317880   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:07.394286   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:07.399922   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:07.399922   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:09.875707   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:09.875707   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:09.882684   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:28:09.883375   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:28:09.883375   15052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.22.153.250"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 13:28:10.037701   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.22.153.250
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 13:28:10.037803   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:12.101487   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:12.101487   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:12.101487   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:14.551492   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:14.562458   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:14.568736   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:28:14.568830   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:28:14.568830   15052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 13:28:16.634766   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 13:28:16.634879   15052 machine.go:97] duration metric: took 44.210536s to provisionDockerMachine
	I0603 13:28:16.634879   15052 client.go:171] duration metric: took 1m53.2581908s to LocalClient.Create
	I0603 13:28:16.634879   15052 start.go:167] duration metric: took 1m53.2581908s to libmachine.API.Create "ha-149700"
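	The docker.service unit echoed back above is rendered from a template parameterised mainly by the primary control plane's address (NO_PROXY) and the driver label before being piped to `sudo tee /lib/systemd/system/docker.service.new`. A trimmed sketch of that rendering — the template text below is abbreviated from the log output, not minikube's exact source:

```go
package main

import (
	"os"
	"text/template"
)

// Abbreviated docker.service template; only the fields that vary per node are
// parameterised here.
var unit = template.Must(template.New("docker.service").Parse(`[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY={{.NoProxy}}"
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={{.Provider}}

[Install]
WantedBy=multi-user.target
`))

func main() {
	// Values taken from this run: the primary control plane IP and the hyperv driver.
	data := struct{ NoProxy, Provider string }{"172.22.153.250", "hyperv"}
	if err := unit.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```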
	I0603 13:28:16.634879   15052 start.go:293] postStartSetup for "ha-149700-m02" (driver="hyperv")
	I0603 13:28:16.634879   15052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:28:16.646878   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:28:16.646878   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:18.718873   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:18.718873   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:18.729699   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:21.164696   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:21.164696   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:21.174923   15052 sshutil.go:53] new ssh client: &{IP:172.22.154.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\id_rsa Username:docker}
	I0603 13:28:21.284908   15052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6379919s)
	I0603 13:28:21.296216   15052 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:28:21.305185   15052 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:28:21.305185   15052 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 13:28:21.305907   15052 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 13:28:21.307045   15052 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 13:28:21.307112   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 13:28:21.318141   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:28:21.338414   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 13:28:21.382954   15052 start.go:296] duration metric: took 4.7480358s for postStartSetup
	I0603 13:28:21.385675   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:23.482324   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:23.482324   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:23.482472   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:25.950235   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:25.950235   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:25.960385   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:28:25.962948   15052 start.go:128] duration metric: took 2m2.5889728s to createHost
	I0603 13:28:25.963037   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:28.017950   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:28.028400   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:28.028400   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:30.456482   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:30.466513   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:30.471890   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:28:30.472619   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:28:30.472619   15052 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:28:30.606907   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717421310.609726096
	
	I0603 13:28:30.606907   15052 fix.go:216] guest clock: 1717421310.609726096
	I0603 13:28:30.606907   15052 fix.go:229] Guest: 2024-06-03 13:28:30.609726096 +0000 UTC Remote: 2024-06-03 13:28:25.9629487 +0000 UTC m=+329.152027201 (delta=4.646777396s)
	I0603 13:28:30.606907   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:32.667509   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:32.667509   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:32.667509   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:35.079610   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:35.079610   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:35.098534   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:28:35.099040   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:28:35.099097   15052 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717421310
	I0603 13:28:35.241426   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 13:28:30 UTC 2024
	
	I0603 13:28:35.241426   15052 fix.go:236] clock set: Mon Jun  3 13:28:30 UTC 2024
	 (err=<nil>)
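	fix.go compares the guest clock (read over SSH with `date`) against the host's wall clock and resets it with `sudo date -s @<unix>` when the drift is too large; here the delta was about 4.6 s. A small sketch of that comparison, where the 2-second threshold is an assumed illustrative value:

```go
package main

import (
	"fmt"
	"time"
)

// needsResync converts the guest's "date +%s.%N" output into a time.Time,
// measures the drift against the host clock, and reports whether it exceeds
// the allowed threshold.
func needsResync(guestUnix float64, host time.Time, threshold time.Duration) (time.Duration, bool) {
	sec := int64(guestUnix)
	nsec := int64((guestUnix - float64(sec)) * 1e9)
	delta := time.Unix(sec, nsec).Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta > threshold
}

func main() {
	// Numbers from this run: guest 1717421310.609..., host 13:28:25.9629487 UTC.
	delta, resync := needsResync(1717421310.609726096, time.Unix(1717421305, 962948700), 2*time.Second)
	fmt.Println(delta, resync) // drift ~4.6s, so the guest clock gets set
}
```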
	I0603 13:28:35.241426   15052 start.go:83] releasing machines lock for "ha-149700-m02", held for 2m11.867636s
	I0603 13:28:35.242106   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:37.308361   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:37.308627   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:37.308627   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:39.773580   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:39.773646   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:39.781728   15052 out.go:177] * Found network options:
	I0603 13:28:39.784066   15052 out.go:177]   - NO_PROXY=172.22.153.250
	W0603 13:28:39.786860   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 13:28:39.788955   15052 out.go:177]   - NO_PROXY=172.22.153.250
	W0603 13:28:39.791476   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 13:28:39.792934   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 13:28:39.793420   15052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:28:39.793420   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:39.798396   15052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 13:28:39.798396   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:41.934598   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:41.934760   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:41.934760   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:41.972760   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:41.973120   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:41.973120   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:44.478130   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:44.478130   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:44.478130   15052 sshutil.go:53] new ssh client: &{IP:172.22.154.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\id_rsa Username:docker}
	I0603 13:28:44.503248   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:44.503248   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:44.504838   15052 sshutil.go:53] new ssh client: &{IP:172.22.154.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\id_rsa Username:docker}
	I0603 13:28:44.566373   15052 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.767938s)
	W0603 13:28:44.566373   15052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:28:44.580075   15052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:28:44.842809   15052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:28:44.842946   15052 start.go:494] detecting cgroup driver to use...
	I0603 13:28:44.842946   15052 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0494841s)
	I0603 13:28:44.843029   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:28:44.887596   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 13:28:44.918380   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 13:28:44.935196   15052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 13:28:44.947173   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 13:28:44.975105   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 13:28:45.006088   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 13:28:45.034679   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 13:28:45.068502   15052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:28:45.100251   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 13:28:45.129981   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 13:28:45.159328   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 13:28:45.191917   15052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:28:45.220515   15052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:28:45.249195   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:28:45.433581   15052 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 13:28:45.464127   15052 start.go:494] detecting cgroup driver to use...
	I0603 13:28:45.476812   15052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 13:28:45.513000   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:28:45.548426   15052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:28:45.582583   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:28:45.619289   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 13:28:45.654075   15052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 13:28:45.713688   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 13:28:45.735183   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:28:45.784476   15052 ssh_runner.go:195] Run: which cri-dockerd
	I0603 13:28:45.803319   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 13:28:45.822848   15052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 13:28:45.864576   15052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 13:28:46.070335   15052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 13:28:46.246159   15052 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 13:28:46.246159   15052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 13:28:46.290892   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:28:46.475516   15052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 13:28:48.962273   15052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4867365s)
	I0603 13:28:48.976004   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 13:28:49.018930   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 13:28:49.055781   15052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 13:28:49.242449   15052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 13:28:49.425546   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:28:49.612903   15052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 13:28:49.653295   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 13:28:49.686640   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:28:49.870462   15052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 13:28:49.970135   15052 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 13:28:49.982958   15052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
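	The "Will wait 60s for socket path" step simply polls for /var/run/cri-dockerd.sock before probing crictl. A generic sketch of that wait loop; the poll interval is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the given path exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
}
```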
	I0603 13:28:49.992982   15052 start.go:562] Will wait 60s for crictl version
	I0603 13:28:50.004725   15052 ssh_runner.go:195] Run: which crictl
	I0603 13:28:50.022270   15052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:28:50.082427   15052 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 13:28:50.092200   15052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 13:28:50.130445   15052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 13:28:50.162776   15052 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 13:28:50.165368   15052 out.go:177]   - env NO_PROXY=172.22.153.250
	I0603 13:28:50.168180   15052 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 13:28:50.172608   15052 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 13:28:50.172608   15052 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 13:28:50.172608   15052 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 13:28:50.172608   15052 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 13:28:50.174487   15052 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 13:28:50.174487   15052 ip.go:210] interface addr: 172.22.144.1/20
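	The ip.go lines above show the host-side lookup: find the adapter whose name starts with "vEthernet (Default Switch)" and take its first IPv4 address, which is then written into the guest as host.minikube.internal. A sketch of that lookup with the standard net package (not minikube's exact implementation):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// defaultSwitchIPv4 returns the first IPv4 address of the first interface whose
// name starts with the given prefix.
func defaultSwitchIPv4(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q", prefix)
}

func main() {
	ip, err := defaultSwitchIPv4("vEthernet (Default Switch)")
	fmt.Println(ip, err) // in this run: 172.22.144.1
}
```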
	I0603 13:28:50.187406   15052 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 13:28:50.194171   15052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:28:50.214218   15052 mustload.go:65] Loading cluster: ha-149700
	I0603 13:28:50.214833   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:28:50.215362   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:28:52.256472   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:52.256472   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:52.265716   15052 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:28:52.265985   15052 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700 for IP: 172.22.154.57
	I0603 13:28:52.265985   15052 certs.go:194] generating shared ca certs ...
	I0603 13:28:52.265985   15052 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:28:52.267374   15052 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 13:28:52.267744   15052 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 13:28:52.267906   15052 certs.go:256] generating profile certs ...
	I0603 13:28:52.268627   15052 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.key
	I0603 13:28:52.268703   15052 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.d47302e0
	I0603 13:28:52.268854   15052 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.d47302e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.22.153.250 172.22.154.57 172.22.159.254]
	I0603 13:28:52.402707   15052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.d47302e0 ...
	I0603 13:28:52.402707   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.d47302e0: {Name:mkf4a9eb687790cb623fb705825c463597bc32ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:28:52.410570   15052 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.d47302e0 ...
	I0603 13:28:52.410570   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.d47302e0: {Name:mk6a70665679a6c2cb0a4ffbe757b331292f3a1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:28:52.412974   15052 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.d47302e0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt
	I0603 13:28:52.424815   15052 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.d47302e0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key
	I0603 13:28:52.426336   15052 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key
	I0603 13:28:52.426336   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 13:28:52.426336   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 13:28:52.426336   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 13:28:52.426894   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 13:28:52.427131   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 13:28:52.427131   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 13:28:52.427726   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 13:28:52.427726   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 13:28:52.428657   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 13:28:52.428948   15052 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 13:28:52.428948   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 13:28:52.429580   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 13:28:52.430005   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 13:28:52.430339   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 13:28:52.430451   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 13:28:52.430451   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
	I0603 13:28:52.431085   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 13:28:52.431220   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:28:52.431412   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:28:54.470978   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:54.470978   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:54.482321   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:56.919132   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:28:56.930251   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:56.930443   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:28:57.036492   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 13:28:57.047378   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 13:28:57.079124   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 13:28:57.087293   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0603 13:28:57.117378   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 13:28:57.120313   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 13:28:57.157391   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 13:28:57.163239   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0603 13:28:57.193061   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 13:28:57.196254   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 13:28:57.229212   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 13:28:57.236843   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0603 13:28:57.253851   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:28:57.302029   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:28:57.349019   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:28:57.393306   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 13:28:57.440571   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0603 13:28:57.493971   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:28:57.553685   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:28:57.600724   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:28:57.646616   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 13:28:57.690112   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 13:28:57.730626   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:28:57.777177   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 13:28:57.819798   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0603 13:28:57.857258   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 13:28:57.889603   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0603 13:28:57.919090   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 13:28:57.952236   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0603 13:28:57.983344   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 13:28:58.023491   15052 ssh_runner.go:195] Run: openssl version
	I0603 13:28:58.044069   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:28:58.077341   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:28:58.084717   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:28:58.094757   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:28:58.119336   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:28:58.151941   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 13:28:58.184272   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 13:28:58.192018   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 13:28:58.204151   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 13:28:58.225785   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
	I0603 13:28:58.258025   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 13:28:58.290530   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 13:28:58.297137   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 13:28:58.315010   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 13:28:58.334136   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:28:58.367776   15052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:28:58.374986   15052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 13:28:58.375233   15052 kubeadm.go:928] updating node {m02 172.22.154.57 8443 v1.30.1 docker true true} ...
	I0603 13:28:58.375233   15052 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-149700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.154.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:28:58.375233   15052 kube-vip.go:115] generating kube-vip config ...
	I0603 13:28:58.387078   15052 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 13:28:58.412222   15052 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 13:28:58.412503   15052 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.22.159.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
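	Only a few fields of the generated kube-vip manifest vary per cluster: the HA virtual IP (172.22.159.254, the profile's APIServerHAVIP), the API server port, and whether control-plane load-balancing is enabled. A toy sketch of how those parameters map onto the container's env entries — an illustration, not kube-vip's or minikube's generator:

```go
package main

import "fmt"

// kubeVipEnv assembles the variable env entries for the kube-vip static pod.
func kubeVipEnv(vip string, port int, lbEnable bool) [][2]string {
	env := [][2]string{
		{"vip_arp", "true"},
		{"port", fmt.Sprint(port)},
		{"address", vip},
	}
	if lbEnable {
		env = append(env, [2]string{"lb_enable", "true"}, [2]string{"lb_port", fmt.Sprint(port)})
	}
	return env
}

func main() {
	for _, kv := range kubeVipEnv("172.22.159.254", 8443, true) {
		fmt.Printf("- name: %s\n  value: %q\n", kv[0], kv[1])
	}
}
```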
	I0603 13:28:58.424857   15052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:28:58.441675   15052 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 13:28:58.453351   15052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 13:28:58.472434   15052 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0603 13:28:58.472753   15052 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0603 13:28:58.472753   15052 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
	I0603 13:28:59.511199   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 13:28:59.520154   15052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 13:28:59.533432   15052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 13:28:59.533432   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 13:29:01.277130   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 13:29:01.287605   15052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 13:29:01.300831   15052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 13:29:01.300951   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 13:29:02.958744   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:29:02.983587   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 13:29:02.994599   15052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 13:29:03.003819   15052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 13:29:03.003988   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0603 13:29:03.677042   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 13:29:03.693378   15052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 13:29:03.726053   15052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:29:03.754414   15052 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 13:29:03.797226   15052 ssh_runner.go:195] Run: grep 172.22.159.254	control-plane.minikube.internal$ /etc/hosts
	I0603 13:29:03.804099   15052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:29:03.838970   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:29:04.017559   15052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:29:04.044776   15052 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:29:04.045807   15052 start.go:316] joinCluster: &{Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.154.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:29:04.046087   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 13:29:04.046145   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:29:06.069546   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:29:06.069546   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:29:06.080095   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:29:08.584022   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:29:08.584022   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:29:08.584228   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:29:08.786876   15052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7407502s)
	I0603 13:29:08.786950   15052 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.22.154.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:29:08.786950   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lli69i.sq06vzkgggvy6rlu --discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-149700-m02 --control-plane --apiserver-advertise-address=172.22.154.57 --apiserver-bind-port=8443"
	I0603 13:29:50.868555   15052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lli69i.sq06vzkgggvy6rlu --discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-149700-m02 --control-plane --apiserver-advertise-address=172.22.154.57 --apiserver-bind-port=8443": (42.0812591s)
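
For context, the two commands above implement the control-plane join: kubeadm token create --print-join-command --ttl=0 runs on the existing control plane to mint a join command, and that command is then replayed on m02 with control-plane specific flags. A compressed sketch of the same flow (illustrative only; minikube drives both steps over SSH via ssh_runner, and the flag values below simply mirror this log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Run on the existing control-plane node: mint a join command with a
        // non-expiring token (matches the --ttl=0 invocation in the log above).
        out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        join := strings.TrimSpace(string(out))

        // For an additional control-plane node the printed worker join command
        // is extended with control-plane specific flags (addresses are the ones
        // this run used; adjust for your own cluster).
        join += " --control-plane --apiserver-advertise-address=172.22.154.57 --apiserver-bind-port=8443"
        fmt.Println(join)
    }
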
	I0603 13:29:50.868693   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 13:29:51.652571   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-149700-m02 minikube.k8s.io/updated_at=2024_06_03T13_29_51_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=ha-149700 minikube.k8s.io/primary=false
	I0603 13:29:51.825727   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-149700-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 13:29:52.015180   15052 start.go:318] duration metric: took 47.9689797s to joinCluster
	I0603 13:29:52.015180   15052 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.22.154.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:29:52.017492   15052 out.go:177] * Verifying Kubernetes components...
	I0603 13:29:52.015180   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:29:52.034513   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:29:52.413981   15052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:29:52.445731   15052 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:29:52.446513   15052 kapi.go:59] client config for ha-149700: &rest.Config{Host:"https://172.22.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-149700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-149700\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 13:29:52.446666   15052 kubeadm.go:477] Overriding stale ClientConfig host https://172.22.159.254:8443 with https://172.22.153.250:8443
	I0603 13:29:52.447561   15052 node_ready.go:35] waiting up to 6m0s for node "ha-149700-m02" to be "Ready" ...
	I0603 13:29:52.447561   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:52.447561   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:52.447561   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:52.447561   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:52.462269   15052 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0603 13:29:52.954176   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:52.954240   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:52.954240   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:52.954240   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:52.967625   15052 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 13:29:53.448452   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:53.448452   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:53.448696   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:53.448696   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:53.452433   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:29:53.957167   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:53.957167   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:53.957167   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:53.957167   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:53.964407   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:29:54.463333   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:54.463333   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:54.463447   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:54.463447   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:54.468385   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:29:54.468920   15052 node_ready.go:53] node "ha-149700-m02" has status "Ready":"False"
	I0603 13:29:54.953094   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:54.953205   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:54.953205   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:54.953205   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:54.960845   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:29:55.458917   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:55.459019   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:55.459019   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:55.459019   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:55.464195   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:29:55.948253   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:55.948253   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:55.948479   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:55.948479   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:55.955021   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:29:56.457554   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:56.457554   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:56.457554   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:56.457554   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:56.463293   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:29:56.963423   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:56.963502   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:56.963502   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:56.963544   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:56.979154   15052 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0603 13:29:56.986390   15052 node_ready.go:53] node "ha-149700-m02" has status "Ready":"False"
	I0603 13:29:57.452091   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:57.452091   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:57.452091   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:57.452091   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:57.456703   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:29:57.958820   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:57.958820   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:57.958911   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:57.958911   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:57.967830   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:29:58.463726   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:58.463978   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:58.463978   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:58.463978   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:58.470640   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:29:58.950887   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:58.950887   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:58.950976   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:58.950976   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:58.955658   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:29:59.451510   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:59.451714   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:59.451714   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:59.451714   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:59.455975   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:29:59.457549   15052 node_ready.go:53] node "ha-149700-m02" has status "Ready":"False"
	I0603 13:29:59.957338   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:59.957532   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:59.957532   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:59.957532   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:59.962709   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:00.460412   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:00.460500   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.460500   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.460500   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.465493   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:00.467447   15052 node_ready.go:49] node "ha-149700-m02" has status "Ready":"True"
	I0603 13:30:00.467514   15052 node_ready.go:38] duration metric: took 8.0198873s for node "ha-149700-m02" to be "Ready" ...
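
The loop above is minikube's node_ready wait: it polls GET /api/v1/nodes/ha-149700-m02 roughly every 500ms until the node's Ready condition flips to True. The same check written against client-go, as a sketch (waitForNodeReady is a hypothetical helper, and a local kubeconfig is assumed; minikube itself uses a lower-level REST client, as the round_trippers lines show):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForNodeReady polls the Node object until its Ready condition is True
    // or the timeout expires.
    func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q not Ready after %s", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForNodeReady(context.Background(), cs, "ha-149700-m02", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node Ready")
    }
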
	I0603 13:30:00.467514   15052 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:30:00.467514   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:30:00.467514   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.467739   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.467759   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.476103   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:00.487900   15052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6qmlg" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.487900   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6qmlg
	I0603 13:30:00.488436   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.488436   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.488473   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.492237   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:00.493882   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:00.493985   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.493985   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.493985   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.497788   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:00.499016   15052 pod_ready.go:92] pod "coredns-7db6d8ff4d-6qmlg" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:00.499016   15052 pod_ready.go:81] duration metric: took 11.1154ms for pod "coredns-7db6d8ff4d-6qmlg" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.499212   15052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ptqqz" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.499212   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ptqqz
	I0603 13:30:00.499318   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.499318   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.499318   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.506306   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:30:00.506524   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:00.507109   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.507109   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.507109   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.514324   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:30:00.515051   15052 pod_ready.go:92] pod "coredns-7db6d8ff4d-ptqqz" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:00.515051   15052 pod_ready.go:81] duration metric: took 15.8387ms for pod "coredns-7db6d8ff4d-ptqqz" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.515051   15052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.515126   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700
	I0603 13:30:00.515204   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.515204   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.515204   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.522393   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:30:00.523755   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:00.523909   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.523909   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.523981   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.528442   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:00.529116   15052 pod_ready.go:92] pod "etcd-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:00.529162   15052 pod_ready.go:81] duration metric: took 14.1118ms for pod "etcd-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.529162   15052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.529299   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:00.529299   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.529299   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.529299   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.533574   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:00.535141   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:00.535187   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.535187   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.535187   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.538399   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:01.035001   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:01.035001   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:01.035001   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:01.035001   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:01.040608   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:01.041747   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:01.041747   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:01.041747   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:01.041747   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:01.047237   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:01.532603   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:01.532697   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:01.532697   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:01.532697   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:01.536174   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:01.537589   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:01.537589   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:01.537589   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:01.537589   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:01.542236   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:02.032233   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:02.032305   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:02.032305   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:02.032305   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:02.040449   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:02.041188   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:02.041188   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:02.041188   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:02.041188   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:02.046008   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:02.533417   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:02.533417   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:02.533417   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:02.533417   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:02.538351   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:02.539720   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:02.539720   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:02.539720   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:02.539720   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:02.543773   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:02.544919   15052 pod_ready.go:102] pod "etcd-ha-149700-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 13:30:03.031327   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:03.031577   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:03.031577   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:03.031577   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:03.036885   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:03.038036   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:03.038036   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:03.038036   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:03.038172   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:03.043312   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:03.544573   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:03.544675   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:03.544675   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:03.544675   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:03.549618   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:03.549923   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:03.549923   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:03.549923   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:03.549923   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:03.554676   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:04.042693   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:04.042965   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:04.042965   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:04.042965   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:04.051286   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:04.052197   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:04.052197   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:04.052197   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:04.052197   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:04.060935   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:04.544230   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:04.544435   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:04.544435   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:04.544435   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:04.555032   15052 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 13:30:04.556027   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:04.556027   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:04.556027   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:04.556027   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:04.559288   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:04.561008   15052 pod_ready.go:102] pod "etcd-ha-149700-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 13:30:05.044010   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:05.044117   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.044117   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.044117   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.049389   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:05.049885   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:05.049885   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.049885   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.049885   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.055061   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:05.056548   15052 pod_ready.go:92] pod "etcd-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:05.056630   15052 pod_ready.go:81] duration metric: took 4.5273488s for pod "etcd-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.056630   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.056744   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700
	I0603 13:30:05.056772   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.056772   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.056772   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.060419   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:05.061193   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:05.061193   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.061193   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.061193   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.065633   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:05.066960   15052 pod_ready.go:92] pod "kube-apiserver-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:05.067051   15052 pod_ready.go:81] duration metric: took 10.4214ms for pod "kube-apiserver-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.067051   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.067138   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700-m02
	I0603 13:30:05.067138   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.067138   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.067138   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.073282   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:30:05.074254   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:05.074254   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.074840   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.074840   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.078835   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:05.078835   15052 pod_ready.go:92] pod "kube-apiserver-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:05.078835   15052 pod_ready.go:81] duration metric: took 11.7837ms for pod "kube-apiserver-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.078835   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.078835   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700
	I0603 13:30:05.079831   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.079831   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.079831   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.087786   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:30:05.260639   15052 request.go:629] Waited for 171.5741ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:05.260930   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:05.260930   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.260930   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.261011   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.269512   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:05.270436   15052 pod_ready.go:92] pod "kube-controller-manager-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:05.270436   15052 pod_ready.go:81] duration metric: took 191.5993ms for pod "kube-controller-manager-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.270436   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.462406   15052 request.go:629] Waited for 191.2509ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700-m02
	I0603 13:30:05.462605   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700-m02
	I0603 13:30:05.462605   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.462684   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.462684   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.471322   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:05.667451   15052 request.go:629] Waited for 194.8664ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:05.667607   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:05.667607   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.667607   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.667666   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.672058   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:05.673463   15052 pod_ready.go:92] pod "kube-controller-manager-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:05.673519   15052 pod_ready.go:81] duration metric: took 403.0797ms for pod "kube-controller-manager-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.673519   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9wjpn" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.871763   15052 request.go:629] Waited for 197.9871ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wjpn
	I0603 13:30:05.872006   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wjpn
	I0603 13:30:05.872075   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.872075   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.872075   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.879276   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:30:06.061846   15052 request.go:629] Waited for 181.3424ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:06.061846   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:06.061846   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:06.061846   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:06.062123   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:06.067604   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:06.068423   15052 pod_ready.go:92] pod "kube-proxy-9wjpn" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:06.068521   15052 pod_ready.go:81] duration metric: took 394.9987ms for pod "kube-proxy-9wjpn" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:06.068521   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbzvt" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:06.267350   15052 request.go:629] Waited for 198.4714ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbzvt
	I0603 13:30:06.267520   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbzvt
	I0603 13:30:06.267634   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:06.267634   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:06.267634   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:06.272942   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:06.469661   15052 request.go:629] Waited for 195.5802ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:06.469931   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:06.469931   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:06.469931   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:06.469931   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:06.474972   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:06.476322   15052 pod_ready.go:92] pod "kube-proxy-vbzvt" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:06.476322   15052 pod_ready.go:81] duration metric: took 407.7974ms for pod "kube-proxy-vbzvt" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:06.476322   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:06.672423   15052 request.go:629] Waited for 195.9004ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700
	I0603 13:30:06.672591   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700
	I0603 13:30:06.672591   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:06.672591   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:06.672591   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:06.676204   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:06.862303   15052 request.go:629] Waited for 184.2693ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:06.862410   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:06.862500   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:06.862500   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:06.862500   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:06.867907   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:06.868299   15052 pod_ready.go:92] pod "kube-scheduler-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:06.868299   15052 pod_ready.go:81] duration metric: took 391.9743ms for pod "kube-scheduler-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:06.868299   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:07.068758   15052 request.go:629] Waited for 200.2059ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700-m02
	I0603 13:30:07.068889   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700-m02
	I0603 13:30:07.068889   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:07.068889   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:07.069085   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:07.076486   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:30:07.271302   15052 request.go:629] Waited for 193.382ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:07.271302   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:07.271302   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:07.271302   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:07.271302   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:07.277006   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:07.279783   15052 pod_ready.go:92] pod "kube-scheduler-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:07.279783   15052 pod_ready.go:81] duration metric: took 411.4807ms for pod "kube-scheduler-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:07.279783   15052 pod_ready.go:38] duration metric: took 6.8122135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
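
The recurring "Waited ... due to client-side throttling, not priority and fairness" lines are client-go's client-side rate limiter at work: the rest.Config dumped by kapi.go above has QPS:0 and Burst:0, so the library defaults of 5 QPS and burst 10 apply, and bursts of node and pod GETs queue briefly. A sketch of how a client could raise those limits (a per-client knob shown for illustration; this test does not change it):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // With QPS/Burst left at 0, client-go falls back to its defaults
        // (5 QPS, burst 10), which is what produces the throttling waits above.
        cfg.QPS = 50
        cfg.Burst = 100

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client ready:", cs != nil)
    }
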
	I0603 13:30:07.279783   15052 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:30:07.291986   15052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:30:07.320968   15052 api_server.go:72] duration metric: took 15.3056628s to wait for apiserver process to appear ...
	I0603 13:30:07.321002   15052 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:30:07.321102   15052 api_server.go:253] Checking apiserver healthz at https://172.22.153.250:8443/healthz ...
	I0603 13:30:07.331095   15052 api_server.go:279] https://172.22.153.250:8443/healthz returned 200:
	ok
	I0603 13:30:07.331132   15052 round_trippers.go:463] GET https://172.22.153.250:8443/version
	I0603 13:30:07.331132   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:07.331132   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:07.331132   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:07.333131   15052 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 13:30:07.333378   15052 api_server.go:141] control plane version: v1.30.1
	I0603 13:30:07.333378   15052 api_server.go:131] duration metric: took 12.309ms to wait for apiserver health ...
	I0603 13:30:07.333378   15052 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:30:07.474670   15052 request.go:629] Waited for 141.0662ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:30:07.474670   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:30:07.474670   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:07.474670   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:07.474670   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:07.484355   15052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 13:30:07.493027   15052 system_pods.go:59] 17 kube-system pods found
	I0603 13:30:07.493027   15052 system_pods.go:61] "coredns-7db6d8ff4d-6qmlg" [e5596259-8a05-48a0-93ca-c46f8d67a213] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "coredns-7db6d8ff4d-ptqqz" [5f7a6070-d736-4701-a5e0-98dd4e01948a] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "etcd-ha-149700" [e75a16ce-11b4-4e7a-8d3d-abfbdb69c3dd] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "etcd-ha-149700-m02" [25624fa9-12e8-4bcf-be97-56ceba40e44d] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kindnet-l2cph" [c145f100-1464-40fa-a165-1a92800515b0] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kindnet-qphhc" [d0b48843-531c-43f1-996a-9ac482b9e838] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-apiserver-ha-149700" [9421ffa6-ceee-4b30-ab28-5b00c6181dd2] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-apiserver-ha-149700-m02" [027bc9b6-d88a-4ee9-bd31-22e3f8ca7463] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-controller-manager-ha-149700" [b812ec80-4942-448f-8017-2440b3f07ce8] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-controller-manager-ha-149700-m02" [c8ad5667-4fec-4425-b553-42ff3f8a3439] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-proxy-9wjpn" [5f53e110-b18c-4255-963d-efecaa1f7f2d] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-proxy-vbzvt" [b025c683-b092-43ca-8dce-b4d687f5eb2d] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-scheduler-ha-149700" [db7d2a13-c940-49f5-bf6f-d5077e3f223c] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-scheduler-ha-149700-m02" [8174835b-f95e-41a3-b5ef-f96197fd45dc] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-vip-ha-149700" [f84f708c-1c96-438f-893e-1a3ed1c16e3a] Running
	I0603 13:30:07.494128   15052 system_pods.go:61] "kube-vip-ha-149700-m02" [d238fd54-8865-4689-9b0c-cfce80b8b3b4] Running
	I0603 13:30:07.494128   15052 system_pods.go:61] "storage-provisioner" [f3d34c4f-12d1-4980-8512-3c80dc9d6047] Running
	I0603 13:30:07.494128   15052 system_pods.go:74] duration metric: took 160.7492ms to wait for pod list to return data ...
	I0603 13:30:07.494128   15052 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:30:07.675477   15052 request.go:629] Waited for 181.103ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/default/serviceaccounts
	I0603 13:30:07.675477   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/default/serviceaccounts
	I0603 13:30:07.675477   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:07.675477   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:07.675477   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:07.681932   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:30:07.683288   15052 default_sa.go:45] found service account: "default"
	I0603 13:30:07.683394   15052 default_sa.go:55] duration metric: took 189.2638ms for default service account to be created ...
	I0603 13:30:07.683394   15052 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:30:07.862294   15052 request.go:629] Waited for 178.6395ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:30:07.862409   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:30:07.862409   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:07.862409   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:07.862409   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:07.870950   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:07.877953   15052 system_pods.go:86] 17 kube-system pods found
	I0603 13:30:07.878095   15052 system_pods.go:89] "coredns-7db6d8ff4d-6qmlg" [e5596259-8a05-48a0-93ca-c46f8d67a213] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "coredns-7db6d8ff4d-ptqqz" [5f7a6070-d736-4701-a5e0-98dd4e01948a] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "etcd-ha-149700" [e75a16ce-11b4-4e7a-8d3d-abfbdb69c3dd] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "etcd-ha-149700-m02" [25624fa9-12e8-4bcf-be97-56ceba40e44d] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kindnet-l2cph" [c145f100-1464-40fa-a165-1a92800515b0] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kindnet-qphhc" [d0b48843-531c-43f1-996a-9ac482b9e838] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-apiserver-ha-149700" [9421ffa6-ceee-4b30-ab28-5b00c6181dd2] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-apiserver-ha-149700-m02" [027bc9b6-d88a-4ee9-bd31-22e3f8ca7463] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-controller-manager-ha-149700" [b812ec80-4942-448f-8017-2440b3f07ce8] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-controller-manager-ha-149700-m02" [c8ad5667-4fec-4425-b553-42ff3f8a3439] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-proxy-9wjpn" [5f53e110-b18c-4255-963d-efecaa1f7f2d] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-proxy-vbzvt" [b025c683-b092-43ca-8dce-b4d687f5eb2d] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-scheduler-ha-149700" [db7d2a13-c940-49f5-bf6f-d5077e3f223c] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-scheduler-ha-149700-m02" [8174835b-f95e-41a3-b5ef-f96197fd45dc] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-vip-ha-149700" [f84f708c-1c96-438f-893e-1a3ed1c16e3a] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-vip-ha-149700-m02" [d238fd54-8865-4689-9b0c-cfce80b8b3b4] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "storage-provisioner" [f3d34c4f-12d1-4980-8512-3c80dc9d6047] Running
	I0603 13:30:07.878095   15052 system_pods.go:126] duration metric: took 194.7ms to wait for k8s-apps to be running ...
	I0603 13:30:07.878095   15052 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:30:07.888204   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:30:07.913061   15052 system_svc.go:56] duration metric: took 34.9657ms WaitForService to wait for kubelet
	I0603 13:30:07.913476   15052 kubeadm.go:576] duration metric: took 15.8981662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:30:07.913545   15052 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:30:08.066340   15052 request.go:629] Waited for 152.5797ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes
	I0603 13:30:08.066340   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes
	I0603 13:30:08.066441   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:08.066441   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:08.066441   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:08.072780   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:30:08.074014   15052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:30:08.074014   15052 node_conditions.go:123] node cpu capacity is 2
	I0603 13:30:08.074093   15052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:30:08.074093   15052 node_conditions.go:123] node cpu capacity is 2
	I0603 13:30:08.074093   15052 node_conditions.go:105] duration metric: took 160.5468ms to run NodePressure ...
	I0603 13:30:08.074093   15052 start.go:240] waiting for startup goroutines ...
	I0603 13:30:08.074152   15052 start.go:254] writing updated cluster config ...
	I0603 13:30:08.078758   15052 out.go:177] 
	I0603 13:30:08.094685   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:30:08.094685   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:30:08.103583   15052 out.go:177] * Starting "ha-149700-m03" control-plane node in "ha-149700" cluster
	I0603 13:30:08.107025   15052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 13:30:08.107025   15052 cache.go:56] Caching tarball of preloaded images
	I0603 13:30:08.107925   15052 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 13:30:08.107925   15052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 13:30:08.107925   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:30:08.115050   15052 start.go:360] acquireMachinesLock for ha-149700-m03: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:30:08.115050   15052 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-149700-m03"
	I0603 13:30:08.115050   15052 start.go:93] Provisioning new machine with config: &{Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.154.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:30:08.115050   15052 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0603 13:30:08.118434   15052 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 13:30:08.119164   15052 start.go:159] libmachine.API.Create for "ha-149700" (driver="hyperv")
	I0603 13:30:08.119164   15052 client.go:168] LocalClient.Create starting
	I0603 13:30:08.119276   15052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0603 13:30:08.119853   15052 main.go:141] libmachine: Decoding PEM data...
	I0603 13:30:08.119853   15052 main.go:141] libmachine: Parsing certificate...
	I0603 13:30:08.120063   15052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0603 13:30:08.120363   15052 main.go:141] libmachine: Decoding PEM data...
	I0603 13:30:08.120363   15052 main.go:141] libmachine: Parsing certificate...
	I0603 13:30:08.120363   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 13:30:10.015264   15052 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 13:30:10.015264   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:10.015562   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 13:30:11.741480   15052 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 13:30:11.741480   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:11.741974   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 13:30:13.220804   15052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 13:30:13.220804   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:13.221126   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 13:30:17.005641   15052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 13:30:17.005641   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:17.007675   15052 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 13:30:17.454215   15052 main.go:141] libmachine: Creating SSH key...
	I0603 13:30:17.825622   15052 main.go:141] libmachine: Creating VM...
	I0603 13:30:17.826094   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 13:30:20.775235   15052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 13:30:20.775235   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:20.775727   15052 main.go:141] libmachine: Using switch "Default Switch"
	I0603 13:30:20.775727   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 13:30:22.589318   15052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 13:30:22.589562   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:22.589562   15052 main.go:141] libmachine: Creating VHD
	I0603 13:30:22.589562   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 13:30:26.382157   15052 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 19722408-E759-4665-8C15-7BCF2EB0A2DC
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 13:30:26.382157   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:26.382157   15052 main.go:141] libmachine: Writing magic tar header
	I0603 13:30:26.382411   15052 main.go:141] libmachine: Writing SSH key tar header
	I0603 13:30:26.392212   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 13:30:29.644578   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:29.644578   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:29.645582   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\disk.vhd' -SizeBytes 20000MB
	I0603 13:30:32.228014   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:32.228014   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:32.228486   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-149700-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 13:30:36.056864   15052 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-149700-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 13:30:36.057564   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:36.057643   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-149700-m03 -DynamicMemoryEnabled $false
	I0603 13:30:38.432218   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:38.432218   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:38.432218   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-149700-m03 -Count 2
	I0603 13:30:40.667864   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:40.667864   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:40.668696   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-149700-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\boot2docker.iso'
	I0603 13:30:43.347702   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:43.348461   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:43.348602   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-149700-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\disk.vhd'
	I0603 13:30:46.040459   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:46.040459   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:46.040459   15052 main.go:141] libmachine: Starting VM...
	I0603 13:30:46.040459   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-149700-m03
	I0603 13:30:49.180909   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:49.180909   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:49.180909   15052 main.go:141] libmachine: Waiting for host to start...
	I0603 13:30:49.181040   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:30:51.490364   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:30:51.490364   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:51.490364   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:30:54.147172   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:54.147172   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:55.158279   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:30:57.446823   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:30:57.446823   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:57.447001   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:00.068774   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:31:00.069775   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:01.070935   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:03.337695   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:03.337747   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:03.337747   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:05.973988   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:31:05.973988   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:06.981788   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:09.292477   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:09.292477   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:09.293673   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:11.894224   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:31:11.894224   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:12.907173   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:15.184116   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:15.184116   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:15.184399   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:17.858045   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:17.858397   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:17.858397   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:20.074722   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:20.074722   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:20.074722   15052 machine.go:94] provisionDockerMachine start ...
	I0603 13:31:20.074912   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:22.348883   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:22.348883   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:22.349091   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:24.964972   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:24.964972   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:24.970822   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:31:24.982611   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:31:24.982611   15052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:31:25.117662   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:31:25.117773   15052 buildroot.go:166] provisioning hostname "ha-149700-m03"
	I0603 13:31:25.117893   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:27.347138   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:27.347687   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:27.347776   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:30.005863   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:30.005863   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:30.014397   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:31:30.014397   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:31:30.014397   15052 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-149700-m03 && echo "ha-149700-m03" | sudo tee /etc/hostname
	I0603 13:31:30.178389   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-149700-m03
	
	I0603 13:31:30.179496   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:32.383359   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:32.383359   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:32.383359   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:35.015458   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:35.016320   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:35.021944   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:31:35.022645   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:31:35.022645   15052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-149700-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-149700-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-149700-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:31:35.178693   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:31:35.179228   15052 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 13:31:35.179266   15052 buildroot.go:174] setting up certificates
	I0603 13:31:35.179266   15052 provision.go:84] configureAuth start
	I0603 13:31:35.179389   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:37.413519   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:37.413519   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:37.413736   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:40.036271   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:40.036271   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:40.036271   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:42.245041   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:42.245645   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:42.245701   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:44.856230   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:44.856721   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:44.856721   15052 provision.go:143] copyHostCerts
	I0603 13:31:44.856879   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 13:31:44.857150   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 13:31:44.857150   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 13:31:44.857637   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 13:31:44.858797   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 13:31:44.859048   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 13:31:44.859048   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 13:31:44.859531   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 13:31:44.860776   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 13:31:44.861090   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 13:31:44.861149   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 13:31:44.861176   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 13:31:44.862166   15052 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-149700-m03 san=[127.0.0.1 172.22.150.43 ha-149700-m03 localhost minikube]
	I0603 13:31:44.976898   15052 provision.go:177] copyRemoteCerts
	I0603 13:31:44.989314   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:31:44.989314   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:47.205207   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:47.205207   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:47.205207   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:49.885247   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:49.886129   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:49.886299   15052 sshutil.go:53] new ssh client: &{IP:172.22.150.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\id_rsa Username:docker}
	I0603 13:31:49.991825   15052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0024692s)
	I0603 13:31:49.991825   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 13:31:49.991825   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:31:50.041834   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 13:31:50.042379   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 13:31:50.095009   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 13:31:50.095564   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:31:50.146141   15052 provision.go:87] duration metric: took 14.9666891s to configureAuth
	I0603 13:31:50.146264   15052 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:31:50.147069   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:31:50.147187   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:52.320460   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:52.321484   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:52.321533   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:54.921348   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:54.921411   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:54.927585   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:31:54.927585   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:31:54.927585   15052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 13:31:55.064169   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 13:31:55.064262   15052 buildroot.go:70] root file system type: tmpfs
	I0603 13:31:55.064585   15052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 13:31:55.064662   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:57.260482   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:57.260482   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:57.260629   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:59.865074   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:59.865074   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:59.870830   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:31:59.871509   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:31:59.871509   15052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.22.153.250"
	Environment="NO_PROXY=172.22.153.250,172.22.154.57"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 13:32:00.039715   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.22.153.250
	Environment=NO_PROXY=172.22.153.250,172.22.154.57
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 13:32:00.039799   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:02.204696   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:02.204696   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:02.204878   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:04.787867   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:04.788637   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:04.797317   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:32:04.797317   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:32:04.797317   15052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 13:32:07.025186   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 13:32:07.025738   15052 machine.go:97] duration metric: took 46.950631s to provisionDockerMachine
	I0603 13:32:07.025738   15052 client.go:171] duration metric: took 1m58.9054871s to LocalClient.Create
	I0603 13:32:07.025878   15052 start.go:167] duration metric: took 1m58.9057386s to libmachine.API.Create "ha-149700"
	I0603 13:32:07.025878   15052 start.go:293] postStartSetup for "ha-149700-m03" (driver="hyperv")
	I0603 13:32:07.025878   15052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:32:07.040879   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:32:07.040879   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:09.221392   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:09.221392   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:09.221811   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:11.872771   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:11.873572   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:11.873690   15052 sshutil.go:53] new ssh client: &{IP:172.22.150.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\id_rsa Username:docker}
	I0603 13:32:11.988145   15052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9472252s)
	I0603 13:32:12.000957   15052 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:32:12.008518   15052 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:32:12.008636   15052 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 13:32:12.009126   15052 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 13:32:12.010124   15052 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 13:32:12.010124   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 13:32:12.022455   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:32:12.043727   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 13:32:12.095244   15052 start.go:296] duration metric: took 5.0693246s for postStartSetup
	I0603 13:32:12.098116   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:14.284282   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:14.284988   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:14.284988   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:16.905317   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:16.905317   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:16.906089   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:32:16.908563   15052 start.go:128] duration metric: took 2m8.7924569s to createHost
	I0603 13:32:16.908625   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:19.136241   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:19.137152   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:19.137285   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:21.803366   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:21.803366   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:21.809757   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:32:21.810340   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:32:21.810541   15052 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:32:21.944831   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717421541.939964799
	
	I0603 13:32:21.944918   15052 fix.go:216] guest clock: 1717421541.939964799
	I0603 13:32:21.944918   15052 fix.go:229] Guest: 2024-06-03 13:32:21.939964799 +0000 UTC Remote: 2024-06-03 13:32:16.9086259 +0000 UTC m=+560.095810701 (delta=5.031338899s)
	I0603 13:32:21.945005   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:24.194988   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:24.194988   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:24.194988   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:26.854859   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:26.855012   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:26.860603   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:32:26.861383   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:32:26.861383   15052 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717421541
	I0603 13:32:27.017953   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 13:32:21 UTC 2024
	
	I0603 13:32:27.017953   15052 fix.go:236] clock set: Mon Jun  3 13:32:21 UTC 2024
	 (err=<nil>)
	I0603 13:32:27.017953   15052 start.go:83] releasing machines lock for "ha-149700-m03", held for 2m18.9017639s
	I0603 13:32:27.017953   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:29.236678   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:29.237477   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:29.237477   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:31.863157   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:31.863157   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:31.866636   15052 out.go:177] * Found network options:
	I0603 13:32:31.869974   15052 out.go:177]   - NO_PROXY=172.22.153.250,172.22.154.57
	W0603 13:32:31.872093   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 13:32:31.872093   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 13:32:31.874949   15052 out.go:177]   - NO_PROXY=172.22.153.250,172.22.154.57
	W0603 13:32:31.877914   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 13:32:31.877914   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 13:32:31.879468   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 13:32:31.879543   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 13:32:31.882419   15052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:32:31.882480   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:31.892926   15052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 13:32:31.892926   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:34.143902   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:34.144175   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:34.144175   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:34.165181   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:34.166103   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:34.166103   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:37.003054   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:37.003333   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:37.003570   15052 sshutil.go:53] new ssh client: &{IP:172.22.150.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\id_rsa Username:docker}
	I0603 13:32:37.028972   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:37.028972   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:37.029621   15052 sshutil.go:53] new ssh client: &{IP:172.22.150.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\id_rsa Username:docker}
	I0603 13:32:37.158511   15052 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2655414s)
	W0603 13:32:37.158677   15052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:32:37.158677   15052 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.276215s)
	I0603 13:32:37.171169   15052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:32:37.200165   15052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:32:37.200301   15052 start.go:494] detecting cgroup driver to use...
	I0603 13:32:37.200505   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:32:37.250315   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 13:32:37.283316   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 13:32:37.304197   15052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 13:32:37.316443   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 13:32:37.348762   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 13:32:37.381957   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 13:32:37.413995   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 13:32:37.451388   15052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:32:37.486007   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 13:32:37.518651   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 13:32:37.552843   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 13:32:37.586730   15052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:32:37.619410   15052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:32:37.651691   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:32:37.863545   15052 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 13:32:37.896459   15052 start.go:494] detecting cgroup driver to use...
	I0603 13:32:37.911973   15052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 13:32:37.956554   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:32:37.992217   15052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:32:38.037960   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:32:38.075746   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 13:32:38.113079   15052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 13:32:38.177594   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 13:32:38.201897   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:32:38.247850   15052 ssh_runner.go:195] Run: which cri-dockerd
	I0603 13:32:38.264863   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 13:32:38.281720   15052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 13:32:38.325611   15052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 13:32:38.536285   15052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 13:32:38.728593   15052 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 13:32:38.728675   15052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 13:32:38.773321   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:32:38.998449   15052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 13:32:41.538132   15052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5396621s)
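The sed commands above switch containerd to the cgroupfs cgroup driver and pin the pause image, while the 130-byte /etc/docker/daemon.json does the same for Docker before the daemon-reload/restart. The log does not show that file's contents, so the daemon.json shape below is an assumption; this is only an illustrative sketch of the step, not minikube's code:

    package main

    import (
    	"encoding/json"
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Assumed contents: the log reports only the file's size (130 bytes), so this
    	// is an illustration of "configuring docker to use cgroupfs", not the exact file.
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
    		log.Fatal(err)
    	}
    	// Same reload/restart sequence the log shows.
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			log.Fatalf("%v: %s", err, out)
    		}
    	}
    }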
	I0603 13:32:41.553586   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 13:32:41.595738   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 13:32:41.635351   15052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 13:32:41.855171   15052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 13:32:42.062671   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:32:42.277851   15052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 13:32:42.322829   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 13:32:42.361039   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:32:42.578360   15052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 13:32:42.691063   15052 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 13:32:42.703351   15052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 13:32:42.712429   15052 start.go:562] Will wait 60s for crictl version
	I0603 13:32:42.725300   15052 ssh_runner.go:195] Run: which crictl
	I0603 13:32:42.743190   15052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:32:42.800669   15052 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 13:32:42.810062   15052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 13:32:42.858169   15052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 13:32:42.893587   15052 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 13:32:42.895978   15052 out.go:177]   - env NO_PROXY=172.22.153.250
	I0603 13:32:42.899442   15052 out.go:177]   - env NO_PROXY=172.22.153.250,172.22.154.57
	I0603 13:32:42.902734   15052 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 13:32:42.906941   15052 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 13:32:42.906941   15052 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 13:32:42.906941   15052 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 13:32:42.906941   15052 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 13:32:42.910159   15052 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 13:32:42.910159   15052 ip.go:210] interface addr: 172.22.144.1/20
	I0603 13:32:42.922073   15052 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 13:32:42.931128   15052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
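The grep/echo pipeline above drops any stale host.minikube.internal line and appends the gateway address found on vEthernet (Default Switch). An equivalent idempotent update, sketched in Go with the values from this run (error handling simplified; writing /etc/hosts requires root):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    // ensureHostsEntry removes any existing line for name and appends "ip\tname",
    // mirroring the grep -v / echo pipeline in the log above.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop the stale entry
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "172.22.144.1", "host.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }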
	I0603 13:32:42.956422   15052 mustload.go:65] Loading cluster: ha-149700
	I0603 13:32:42.957191   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:32:42.957985   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:32:45.161140   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:45.161349   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:45.161349   15052 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:32:45.163803   15052 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700 for IP: 172.22.150.43
	I0603 13:32:45.163803   15052 certs.go:194] generating shared ca certs ...
	I0603 13:32:45.163803   15052 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:32:45.164383   15052 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 13:32:45.164919   15052 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 13:32:45.165144   15052 certs.go:256] generating profile certs ...
	I0603 13:32:45.165285   15052 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.key
	I0603 13:32:45.165285   15052 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.e71a32e9
	I0603 13:32:45.165285   15052 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.e71a32e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.22.153.250 172.22.154.57 172.22.150.43 172.22.159.254]
	I0603 13:32:45.425427   15052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.e71a32e9 ...
	I0603 13:32:45.425427   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.e71a32e9: {Name:mke9e0949185c0a71159b79a255f9c85fc9b5e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:32:45.426411   15052 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.e71a32e9 ...
	I0603 13:32:45.426411   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.e71a32e9: {Name:mkeb05129fdadc43e68981aff8b83abf95ceefd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:32:45.427443   15052 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.e71a32e9 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt
	I0603 13:32:45.438963   15052 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.e71a32e9 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key
	I0603 13:32:45.441103   15052 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key
	I0603 13:32:45.441103   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 13:32:45.441334   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 13:32:45.441518   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 13:32:45.441698   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 13:32:45.441791   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 13:32:45.441791   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 13:32:45.441791   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 13:32:45.442585   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 13:32:45.442857   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 13:32:45.442857   15052 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 13:32:45.443436   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 13:32:45.443630   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 13:32:45.443630   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 13:32:45.444161   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 13:32:45.444479   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 13:32:45.444479   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 13:32:45.445082   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:32:45.445263   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
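The apiserver certificate generated above is signed by minikubeCA and must list every address clients may dial: the in-cluster service IP 10.96.0.1, localhost, the three control-plane node IPs and the kube-vip VIP 172.22.159.254. A self-contained crypto/x509 sketch of issuing such a cert (key sizes, validity periods and output paths are assumptions; minikube's own crypto.go is not reproduced here):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// In the real flow the CA cert/key are loaded from the existing minikubeCA files;
    	// they are generated here only to keep the sketch self-contained.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// IP SANs copied from the "Generating cert ... with IP's" line above.
    	var ips []net.IP
    	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
    		"172.22.153.250", "172.22.154.57", "172.22.150.43", "172.22.159.254"} {
    		ips = append(ips, net.ParseIP(s))
    	}

    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	out, _ := os.Create("apiserver.crt")
    	defer out.Close()
    	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }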
	I0603 13:32:45.445534   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:32:47.698840   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:47.698840   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:47.698928   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:50.381834   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:32:50.381834   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:50.382427   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:32:50.487975   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 13:32:50.496255   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 13:32:50.532002   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 13:32:50.543071   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0603 13:32:50.578560   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 13:32:50.586361   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 13:32:50.619126   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 13:32:50.624623   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0603 13:32:50.661168   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 13:32:50.668623   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 13:32:50.701188   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 13:32:50.707337   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0603 13:32:50.727851   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:32:50.779098   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:32:50.830009   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:32:50.877439   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 13:32:50.931615   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0603 13:32:50.980919   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:32:51.026832   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:32:51.077131   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:32:51.132545   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 13:32:51.181374   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:32:51.230234   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 13:32:51.279831   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 13:32:51.313071   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0603 13:32:51.349063   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 13:32:51.384805   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0603 13:32:51.426131   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 13:32:51.464842   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0603 13:32:51.502127   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 13:32:51.551845   15052 ssh_runner.go:195] Run: openssl version
	I0603 13:32:51.574248   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:32:51.607281   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:32:51.616423   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:32:51.630094   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:32:51.652617   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:32:51.685805   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 13:32:51.720925   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 13:32:51.728239   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 13:32:51.743704   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 13:32:51.766385   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
	I0603 13:32:51.800222   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 13:32:51.833265   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 13:32:51.840489   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 13:32:51.853789   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 13:32:51.875679   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:32:51.910153   15052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:32:51.918378   15052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 13:32:51.918700   15052 kubeadm.go:928] updating node {m03 172.22.150.43 8443 v1.30.1 docker true true} ...
	I0603 13:32:51.918700   15052 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-149700-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.150.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
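The drop-in above clears ExecStart and restarts kubelet with per-node flags, so ha-149700-m03 registers under its own hostname and IP. A small text/template sketch that renders those flags from node-specific values (the template text is approximated from the log, not copied from minikube's source):

    package main

    import (
    	"os"
    	"text/template"
    )

    const kubeletUnit = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
    	// Values taken from the log lines above.
    	t.Execute(os.Stdout, struct {
    		KubernetesVersion, NodeName, NodeIP string
    	}{"v1.30.1", "ha-149700-m03", "172.22.150.43"})
    }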
	I0603 13:32:51.918700   15052 kube-vip.go:115] generating kube-vip config ...
	I0603 13:32:51.931248   15052 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 13:32:51.960322   15052 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 13:32:51.960487   15052 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.22.159.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
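kube-vip runs as a static pod on every control-plane node, takes the plndr-cp-lock lease, and holds the VIP 172.22.159.254; with lb_enable it also load-balances API traffic on port 8443. A sketch of templating just the cluster-specific environment entries of that manifest (the remaining fields are exactly as printed above):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Only the environment entries that vary per cluster; securityContext, hostNetwork
    // and the admin.conf mount are as shown in the generated manifest above.
    const vipEnv = `    - name: port
          value: "{{.Port}}"
        - name: vip_interface
          value: {{.Interface}}
        - name: address
          value: {{.VIP}}
        - name: lb_enable
          value: "true"
        - name: lb_port
          value: "{{.Port}}"
    `

    func main() {
    	t := template.Must(template.New("vip").Parse(vipEnv))
    	t.Execute(os.Stdout, struct {
    		VIP, Interface, Port string
    	}{"172.22.159.254", "eth0", "8443"})
    }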
	I0603 13:32:51.973317   15052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:32:51.996573   15052 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 13:32:52.009688   15052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 13:32:52.027813   15052 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 13:32:52.027813   15052 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0603 13:32:52.027813   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 13:32:52.027813   15052 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0603 13:32:52.027813   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 13:32:52.044153   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:32:52.044430   15052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 13:32:52.045057   15052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 13:32:52.069046   15052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 13:32:52.069134   15052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 13:32:52.069250   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 13:32:52.069250   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 13:32:52.069250   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 13:32:52.086385   15052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 13:32:52.132720   15052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 13:32:52.132720   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0603 13:32:53.429103   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 13:32:53.453714   15052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 13:32:53.492071   15052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:32:53.525152   15052 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 13:32:53.572452   15052 ssh_runner.go:195] Run: grep 172.22.159.254	control-plane.minikube.internal$ /etc/hosts
	I0603 13:32:53.579592   15052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:32:53.623744   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:32:53.844290   15052 ssh_runner.go:195] Run: sudo systemctl start kubelet
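Because the binaries are not cached locally, each one is fetched from dl.k8s.io and checked against its published .sha256 file before landing in /var/lib/minikube/binaries/v1.30.1 on the node. A hedged sketch of that download-and-verify step for kubelet (the whole body is buffered in memory and written to the working directory, which production code would avoid):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetch downloads url and returns the body; used for both the binary and its .sha256 file.
    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet"
    	bin, err := fetch(base)
    	if err != nil {
    		log.Fatal(err)
    	}
    	sum, err := fetch(base + ".sha256")
    	if err != nil {
    		log.Fatal(err)
    	}
    	want := strings.Fields(string(sum))[0]
    	got := sha256.Sum256(bin)
    	if hex.EncodeToString(got[:]) != want {
    		log.Fatalf("checksum mismatch: got %x, want %s", got, want)
    	}
    	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
    		log.Fatal(err)
    	}
    }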
	I0603 13:32:53.876045   15052 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:32:53.876673   15052 start.go:316] joinCluster: &{Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.154.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.22.150.43 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:32:53.876673   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 13:32:53.877487   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:32:56.104756   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:56.104756   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:56.104756   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:58.743376   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:32:58.743467   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:58.743467   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:32:58.979028   15052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1022775s)
	I0603 13:32:58.979096   15052 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.22.150.43 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:32:58.979173   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oazovl.bojr37tgui3yqu3q --discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-149700-m03 --control-plane --apiserver-advertise-address=172.22.150.43 --apiserver-bind-port=8443"
	I0603 13:33:44.496870   15052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oazovl.bojr37tgui3yqu3q --discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-149700-m03 --control-plane --apiserver-advertise-address=172.22.150.43 --apiserver-bind-port=8443": (45.5172039s)
	I0603 13:33:44.496988   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 13:33:45.382234   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-149700-m03 minikube.k8s.io/updated_at=2024_06_03T13_33_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=ha-149700 minikube.k8s.io/primary=false
	I0603 13:33:45.554480   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-149700-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 13:33:45.703262   15052 start.go:318] duration metric: took 51.8261692s to joinCluster
	I0603 13:33:45.703461   15052 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.22.150.43 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:33:45.703761   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:33:45.708788   15052 out.go:177] * Verifying Kubernetes components...
	I0603 13:33:45.727224   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:33:46.177445   15052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:33:46.215937   15052 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:33:46.216755   15052 kapi.go:59] client config for ha-149700: &rest.Config{Host:"https://172.22.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-149700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-149700\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 13:33:46.216955   15052 kubeadm.go:477] Overriding stale ClientConfig host https://172.22.159.254:8443 with https://172.22.153.250:8443
	I0603 13:33:46.217874   15052 node_ready.go:35] waiting up to 6m0s for node "ha-149700-m03" to be "Ready" ...
	I0603 13:33:46.217874   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:46.217874   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:46.217874   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:46.217874   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:46.232804   15052 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0603 13:33:46.723993   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:46.724074   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:46.724074   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:46.724074   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:46.728523   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:47.228685   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:47.228953   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:47.228953   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:47.228953   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:47.238208   15052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 13:33:47.719946   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:47.719946   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:47.720021   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:47.720021   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:47.725222   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:48.221181   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:48.221181   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:48.221181   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:48.221181   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:48.226645   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:48.227550   15052 node_ready.go:53] node "ha-149700-m03" has status "Ready":"False"
	I0603 13:33:48.728346   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:48.728346   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:48.728346   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:48.728346   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:48.733666   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:49.218876   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:49.218876   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:49.218876   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:49.218876   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:49.223158   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:49.726805   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:49.727069   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:49.727069   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:49.727069   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:49.731537   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:50.228635   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:50.228635   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:50.228967   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:50.228967   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:50.235493   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:33:50.236263   15052 node_ready.go:53] node "ha-149700-m03" has status "Ready":"False"
	I0603 13:33:50.730586   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:50.730645   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:50.730645   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:50.730645   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:50.735414   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:51.233044   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:51.233197   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:51.233197   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:51.233197   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:51.236446   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:51.727570   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:51.727570   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:51.727708   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:51.727708   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:51.733077   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:52.225949   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:52.225949   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.225949   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.225949   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.231429   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:52.731836   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:52.731930   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.731930   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.731930   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.736107   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:52.738736   15052 node_ready.go:49] node "ha-149700-m03" has status "Ready":"True"
	I0603 13:33:52.738736   15052 node_ready.go:38] duration metric: took 6.5208099s for node "ha-149700-m03" to be "Ready" ...
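The round-trip GETs above poll /api/v1/nodes/ha-149700-m03 roughly twice a second until its Ready condition turns True, which took about 6.5s here. The same wait expressed against client-go, using the kubeconfig path from this run (a simplified stand-in for minikube's node_ready logic, not the actual code):

    package main

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube3\minikube-integration\kubeconfig`)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Same 6m budget and ~0.5s cadence seen in the log above.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	for {
    		n, err := cs.CoreV1().Nodes().Get(ctx, "ha-149700-m03", metav1.GetOptions{})
    		if err == nil && nodeReady(n) {
    			log.Println("node is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			log.Fatal("timed out waiting for node to become Ready")
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }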
	I0603 13:33:52.738736   15052 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:33:52.738942   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:33:52.738942   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.738942   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.738942   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.764416   15052 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0603 13:33:52.777483   15052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6qmlg" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.777483   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6qmlg
	I0603 13:33:52.777483   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.777483   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.777483   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.782833   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:52.784169   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:52.784278   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.784278   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.784331   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.787510   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:52.788704   15052 pod_ready.go:92] pod "coredns-7db6d8ff4d-6qmlg" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:52.788776   15052 pod_ready.go:81] duration metric: took 11.2215ms for pod "coredns-7db6d8ff4d-6qmlg" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.788776   15052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ptqqz" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.788899   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ptqqz
	I0603 13:33:52.788939   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.788962   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.788962   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.793266   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:52.794278   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:52.794325   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.794378   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.794378   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.801557   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:33:52.804921   15052 pod_ready.go:92] pod "coredns-7db6d8ff4d-ptqqz" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:52.805081   15052 pod_ready.go:81] duration metric: took 16.3042ms for pod "coredns-7db6d8ff4d-ptqqz" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.805081   15052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.805533   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700
	I0603 13:33:52.805533   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.805628   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.805628   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.815090   15052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 13:33:52.815881   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:52.816223   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.816258   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.816258   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.819441   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:52.820361   15052 pod_ready.go:92] pod "etcd-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:52.820439   15052 pod_ready.go:81] duration metric: took 15.358ms for pod "etcd-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.820501   15052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.820608   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:33:52.820633   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.820672   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.820672   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.826727   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:52.827290   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:52.827290   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.827290   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.827290   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.831678   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:52.833057   15052 pod_ready.go:92] pod "etcd-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:52.833119   15052 pod_ready.go:81] duration metric: took 12.6175ms for pod "etcd-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.833119   15052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.936553   15052 request.go:629] Waited for 103.1539ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:52.936736   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:52.936736   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.936736   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.936736   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.940852   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:53.140964   15052 request.go:629] Waited for 197.7085ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:53.141034   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:53.141118   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:53.141118   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:53.141118   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:53.146081   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:53.346290   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:53.346644   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:53.346644   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:53.346644   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:53.351303   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:53.532425   15052 request.go:629] Waited for 179.7093ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:53.532582   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:53.532731   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:53.532766   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:53.532766   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:53.537962   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:53.847270   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:53.847270   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:53.847270   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:53.847270   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:53.851844   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:53.941243   15052 request.go:629] Waited for 87.66ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:53.941390   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:53.941390   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:53.941390   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:53.941453   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:53.947957   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:33:54.333696   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:54.333771   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:54.333771   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:54.333771   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:54.338723   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:54.340722   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:54.340722   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:54.340722   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:54.340722   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:54.344334   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:54.847538   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:54.847538   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:54.847538   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:54.847871   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:54.853210   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:54.854994   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:54.854994   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:54.854994   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:54.854994   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:54.859298   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:54.860591   15052 pod_ready.go:102] pod "etcd-ha-149700-m03" in "kube-system" namespace has status "Ready":"False"
	I0603 13:33:55.335211   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:55.335211   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:55.335496   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:55.335496   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:55.348871   15052 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 13:33:55.349970   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:55.349970   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:55.349970   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:55.349970   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:55.353302   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:55.839954   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:55.839954   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:55.839954   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:55.839954   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:55.845120   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:55.846523   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:55.846523   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:55.846523   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:55.846523   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:55.850171   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:56.344273   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:56.344273   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.344335   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.344335   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.349704   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:56.350668   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:56.350724   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.350724   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.350724   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.354363   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:56.355598   15052 pod_ready.go:92] pod "etcd-ha-149700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:56.355598   15052 pod_ready.go:81] duration metric: took 3.5224505s for pod "etcd-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:56.355660   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:56.355660   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700
	I0603 13:33:56.355791   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.355820   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.355820   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.359611   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:56.360697   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:56.360697   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.360697   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.360697   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.364597   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:56.365599   15052 pod_ready.go:92] pod "kube-apiserver-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:56.365599   15052 pod_ready.go:81] duration metric: took 9.9386ms for pod "kube-apiserver-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:56.365599   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:56.533802   15052 request.go:629] Waited for 168.0495ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700-m02
	I0603 13:33:56.534294   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700-m02
	I0603 13:33:56.534294   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.534294   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.534294   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.538893   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:56.736410   15052 request.go:629] Waited for 196.2816ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:56.736410   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:56.736603   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.736603   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.736603   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.742600   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:56.743261   15052 pod_ready.go:92] pod "kube-apiserver-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:56.743261   15052 pod_ready.go:81] duration metric: took 377.6587ms for pod "kube-apiserver-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:56.743261   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:56.939312   15052 request.go:629] Waited for 195.5876ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700-m03
	I0603 13:33:56.939312   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700-m03
	I0603 13:33:56.939312   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.939312   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.939312   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.944674   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:57.141564   15052 request.go:629] Waited for 196.0206ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:57.141745   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:57.141745   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:57.141745   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:57.141745   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:57.146932   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:57.147556   15052 pod_ready.go:92] pod "kube-apiserver-ha-149700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:57.147556   15052 pod_ready.go:81] duration metric: took 404.2916ms for pod "kube-apiserver-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:57.147556   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:57.332461   15052 request.go:629] Waited for 184.5735ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700
	I0603 13:33:57.332549   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700
	I0603 13:33:57.332614   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:57.332614   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:57.332614   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:57.338573   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:57.536944   15052 request.go:629] Waited for 197.3655ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:57.536944   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:57.536944   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:57.536944   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:57.536944   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:57.540999   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:57.540999   15052 pod_ready.go:92] pod "kube-controller-manager-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:57.540999   15052 pod_ready.go:81] duration metric: took 393.4403ms for pod "kube-controller-manager-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:57.540999   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:57.740879   15052 request.go:629] Waited for 199.878ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700-m02
	I0603 13:33:57.741102   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700-m02
	I0603 13:33:57.741102   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:57.741102   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:57.741102   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:57.746898   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:57.945550   15052 request.go:629] Waited for 198.4235ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:57.945677   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:57.945677   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:57.945677   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:57.945766   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:57.951357   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:57.952195   15052 pod_ready.go:92] pod "kube-controller-manager-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:57.952195   15052 pod_ready.go:81] duration metric: took 411.1929ms for pod "kube-controller-manager-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:57.952751   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:58.134177   15052 request.go:629] Waited for 181.271ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700-m03
	I0603 13:33:58.134264   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700-m03
	I0603 13:33:58.134264   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:58.134264   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:58.134470   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:58.139278   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:58.336533   15052 request.go:629] Waited for 196.0116ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:58.336774   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:58.336774   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:58.336858   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:58.336858   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:58.343168   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:33:58.343936   15052 pod_ready.go:92] pod "kube-controller-manager-ha-149700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:58.344023   15052 pod_ready.go:81] duration metric: took 391.2687ms for pod "kube-controller-manager-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:58.344107   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9wjpn" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:58.540487   15052 request.go:629] Waited for 196.3086ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wjpn
	I0603 13:33:58.540877   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wjpn
	I0603 13:33:58.540877   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:58.540877   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:58.540877   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:58.549835   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:33:58.743720   15052 request.go:629] Waited for 192.483ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:58.743991   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:58.744115   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:58.744187   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:58.744187   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:58.749547   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:58.751499   15052 pod_ready.go:92] pod "kube-proxy-9wjpn" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:58.751600   15052 pod_ready.go:81] duration metric: took 407.3888ms for pod "kube-proxy-9wjpn" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:58.751600   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pvnfv" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:58.946791   15052 request.go:629] Waited for 194.9025ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pvnfv
	I0603 13:33:58.947026   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pvnfv
	I0603 13:33:58.947026   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:58.947026   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:58.947163   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:58.951484   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:59.135887   15052 request.go:629] Waited for 182.1945ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:59.135887   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:59.135887   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:59.136156   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:59.136191   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:59.141375   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:59.142452   15052 pod_ready.go:92] pod "kube-proxy-pvnfv" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:59.142452   15052 pod_ready.go:81] duration metric: took 390.8489ms for pod "kube-proxy-pvnfv" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:59.142452   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbzvt" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:59.339259   15052 request.go:629] Waited for 196.464ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbzvt
	I0603 13:33:59.339259   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbzvt
	I0603 13:33:59.339259   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:59.339259   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:59.339259   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:59.343217   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:59.545562   15052 request.go:629] Waited for 200.6889ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:59.545819   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:59.545819   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:59.545819   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:59.545892   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:59.550573   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:59.551709   15052 pod_ready.go:92] pod "kube-proxy-vbzvt" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:59.551778   15052 pod_ready.go:81] duration metric: took 409.2254ms for pod "kube-proxy-vbzvt" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:59.551778   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:59.735623   15052 request.go:629] Waited for 183.5049ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700
	I0603 13:33:59.735623   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700
	I0603 13:33:59.735869   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:59.735869   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:59.735869   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:59.742243   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:33:59.942917   15052 request.go:629] Waited for 199.9159ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:59.942917   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:59.942917   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:59.942917   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:59.942917   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:59.956085   15052 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 13:33:59.956783   15052 pod_ready.go:92] pod "kube-scheduler-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:59.956869   15052 pod_ready.go:81] duration metric: took 405.0877ms for pod "kube-scheduler-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:59.956899   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:34:00.147232   15052 request.go:629] Waited for 190.1461ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700-m02
	I0603 13:34:00.147640   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700-m02
	I0603 13:34:00.147640   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:00.147640   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:00.147780   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:00.153075   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:34:00.335010   15052 request.go:629] Waited for 180.1376ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:34:00.335214   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:34:00.335214   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:00.335214   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:00.335214   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:00.339598   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:34:00.341105   15052 pod_ready.go:92] pod "kube-scheduler-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:34:00.341105   15052 pod_ready.go:81] duration metric: took 384.202ms for pod "kube-scheduler-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:34:00.341105   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:34:00.537741   15052 request.go:629] Waited for 196.6347ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700-m03
	I0603 13:34:00.537741   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700-m03
	I0603 13:34:00.537741   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:00.537741   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:00.537741   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:00.542743   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:34:00.738683   15052 request.go:629] Waited for 194.3897ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:34:00.738909   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:34:00.738909   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:00.739035   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:00.739035   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:00.743214   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:34:00.744846   15052 pod_ready.go:92] pod "kube-scheduler-ha-149700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 13:34:00.744916   15052 pod_ready.go:81] duration metric: took 403.8078ms for pod "kube-scheduler-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:34:00.744916   15052 pod_ready.go:38] duration metric: took 8.0061142s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:34:00.745033   15052 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:34:00.757859   15052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:34:00.788501   15052 api_server.go:72] duration metric: took 15.0847816s to wait for apiserver process to appear ...
	I0603 13:34:00.788501   15052 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:34:00.788577   15052 api_server.go:253] Checking apiserver healthz at https://172.22.153.250:8443/healthz ...
	I0603 13:34:00.798814   15052 api_server.go:279] https://172.22.153.250:8443/healthz returned 200:
	ok
	I0603 13:34:00.799227   15052 round_trippers.go:463] GET https://172.22.153.250:8443/version
	I0603 13:34:00.799227   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:00.799227   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:00.799227   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:00.800430   15052 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 13:34:00.800977   15052 api_server.go:141] control plane version: v1.30.1
	I0603 13:34:00.801059   15052 api_server.go:131] duration metric: took 12.4813ms to wait for apiserver health ...
	I0603 13:34:00.801093   15052 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:34:00.940914   15052 request.go:629] Waited for 139.6746ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:34:00.941015   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:34:00.941165   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:00.941165   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:00.941165   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:00.952052   15052 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 13:34:00.962948   15052 system_pods.go:59] 24 kube-system pods found
	I0603 13:34:00.962948   15052 system_pods.go:61] "coredns-7db6d8ff4d-6qmlg" [e5596259-8a05-48a0-93ca-c46f8d67a213] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "coredns-7db6d8ff4d-ptqqz" [5f7a6070-d736-4701-a5e0-98dd4e01948a] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "etcd-ha-149700" [e75a16ce-11b4-4e7a-8d3d-abfbdb69c3dd] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "etcd-ha-149700-m02" [25624fa9-12e8-4bcf-be97-56ceba40e44d] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "etcd-ha-149700-m03" [ff62797d-c9d4-4355-8357-9c8682ac707e] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kindnet-l2cph" [c145f100-1464-40fa-a165-1a92800515b0] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kindnet-qphhc" [d0b48843-531c-43f1-996a-9ac482b9e838] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kindnet-v4w4l" [3df37f74-f7b9-43c1-854b-38ab7224fc66] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-apiserver-ha-149700" [9421ffa6-ceee-4b30-ab28-5b00c6181dd2] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-apiserver-ha-149700-m02" [027bc9b6-d88a-4ee9-bd31-22e3f8ca7463] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-apiserver-ha-149700-m03" [290fcfac-d887-4444-b19c-2662b0e2cdf0] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-controller-manager-ha-149700" [b812ec80-4942-448f-8017-2440b3f07ce8] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-controller-manager-ha-149700-m02" [c8ad5667-4fec-4425-b553-42ff3f8a3439] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-controller-manager-ha-149700-m03" [9fe1e19c-fd2d-48fe-8fda-7e327c91cabb] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-proxy-9wjpn" [5f53e110-b18c-4255-963d-efecaa1f7f2d] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-proxy-pvnfv" [6daa679a-0264-4142-9ecb-a87d769db00b] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-proxy-vbzvt" [b025c683-b092-43ca-8dce-b4d687f5eb2d] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-scheduler-ha-149700" [db7d2a13-c940-49f5-bf6f-d5077e3f223c] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-scheduler-ha-149700-m02" [8174835b-f95e-41a3-b5ef-f96197fd45dc] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-scheduler-ha-149700-m03" [d3bec3fd-3af2-4551-96b6-7fdffd794600] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-vip-ha-149700" [f84f708c-1c96-438f-893e-1a3ed1c16e3a] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-vip-ha-149700-m02" [d238fd54-8865-4689-9b0c-cfce80b8b3b4] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-vip-ha-149700-m03" [0c108f8d-1b10-466e-b210-7ef8a84bc9c2] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "storage-provisioner" [f3d34c4f-12d1-4980-8512-3c80dc9d6047] Running
	I0603 13:34:00.962948   15052 system_pods.go:74] duration metric: took 161.8538ms to wait for pod list to return data ...
	I0603 13:34:00.962948   15052 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:34:01.144741   15052 request.go:629] Waited for 181.0052ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/default/serviceaccounts
	I0603 13:34:01.144741   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/default/serviceaccounts
	I0603 13:34:01.144741   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:01.144741   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:01.144741   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:01.149371   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:34:01.149908   15052 default_sa.go:45] found service account: "default"
	I0603 13:34:01.150032   15052 default_sa.go:55] duration metric: took 186.9583ms for default service account to be created ...
	I0603 13:34:01.150032   15052 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:34:01.346348   15052 request.go:629] Waited for 196.2316ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:34:01.346587   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:34:01.346587   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:01.346587   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:01.346675   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:01.360179   15052 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 13:34:01.370303   15052 system_pods.go:86] 24 kube-system pods found
	I0603 13:34:01.370303   15052 system_pods.go:89] "coredns-7db6d8ff4d-6qmlg" [e5596259-8a05-48a0-93ca-c46f8d67a213] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "coredns-7db6d8ff4d-ptqqz" [5f7a6070-d736-4701-a5e0-98dd4e01948a] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "etcd-ha-149700" [e75a16ce-11b4-4e7a-8d3d-abfbdb69c3dd] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "etcd-ha-149700-m02" [25624fa9-12e8-4bcf-be97-56ceba40e44d] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "etcd-ha-149700-m03" [ff62797d-c9d4-4355-8357-9c8682ac707e] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kindnet-l2cph" [c145f100-1464-40fa-a165-1a92800515b0] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kindnet-qphhc" [d0b48843-531c-43f1-996a-9ac482b9e838] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kindnet-v4w4l" [3df37f74-f7b9-43c1-854b-38ab7224fc66] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-apiserver-ha-149700" [9421ffa6-ceee-4b30-ab28-5b00c6181dd2] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-apiserver-ha-149700-m02" [027bc9b6-d88a-4ee9-bd31-22e3f8ca7463] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-apiserver-ha-149700-m03" [290fcfac-d887-4444-b19c-2662b0e2cdf0] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-controller-manager-ha-149700" [b812ec80-4942-448f-8017-2440b3f07ce8] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-controller-manager-ha-149700-m02" [c8ad5667-4fec-4425-b553-42ff3f8a3439] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-controller-manager-ha-149700-m03" [9fe1e19c-fd2d-48fe-8fda-7e327c91cabb] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-proxy-9wjpn" [5f53e110-b18c-4255-963d-efecaa1f7f2d] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-proxy-pvnfv" [6daa679a-0264-4142-9ecb-a87d769db00b] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-proxy-vbzvt" [b025c683-b092-43ca-8dce-b4d687f5eb2d] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-scheduler-ha-149700" [db7d2a13-c940-49f5-bf6f-d5077e3f223c] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-scheduler-ha-149700-m02" [8174835b-f95e-41a3-b5ef-f96197fd45dc] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-scheduler-ha-149700-m03" [d3bec3fd-3af2-4551-96b6-7fdffd794600] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-vip-ha-149700" [f84f708c-1c96-438f-893e-1a3ed1c16e3a] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-vip-ha-149700-m02" [d238fd54-8865-4689-9b0c-cfce80b8b3b4] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-vip-ha-149700-m03" [0c108f8d-1b10-466e-b210-7ef8a84bc9c2] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "storage-provisioner" [f3d34c4f-12d1-4980-8512-3c80dc9d6047] Running
	I0603 13:34:01.370303   15052 system_pods.go:126] duration metric: took 220.2695ms to wait for k8s-apps to be running ...
	I0603 13:34:01.370303   15052 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:34:01.381898   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:34:01.412160   15052 system_svc.go:56] duration metric: took 41.8565ms WaitForService to wait for kubelet
	I0603 13:34:01.412160   15052 kubeadm.go:576] duration metric: took 15.7084362s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:34:01.412160   15052 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:34:01.536385   15052 request.go:629] Waited for 124.1383ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes
	I0603 13:34:01.536555   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes
	I0603 13:34:01.536616   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:01.536616   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:01.536616   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:01.541875   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:34:01.544213   15052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:34:01.544452   15052 node_conditions.go:123] node cpu capacity is 2
	I0603 13:34:01.544452   15052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:34:01.544452   15052 node_conditions.go:123] node cpu capacity is 2
	I0603 13:34:01.544452   15052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:34:01.544576   15052 node_conditions.go:123] node cpu capacity is 2
	I0603 13:34:01.544576   15052 node_conditions.go:105] duration metric: took 132.4149ms to run NodePressure ...
	I0603 13:34:01.544576   15052 start.go:240] waiting for startup goroutines ...
	I0603 13:34:01.544646   15052 start.go:254] writing updated cluster config ...
	I0603 13:34:01.557345   15052 ssh_runner.go:195] Run: rm -f paused
	I0603 13:34:01.694652   15052 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:34:01.699803   15052 out.go:177] * Done! kubectl is now configured to use "ha-149700" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 03 13:26:26 ha-149700 cri-dockerd[1221]: time="2024-06-03T13:26:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f3e2e2177b00f4eb37ebc89dcc8a42c167af66ae2e367e30888f09742eb0c8a9/resolv.conf as [nameserver 172.22.144.1]"
	Jun 03 13:26:26 ha-149700 cri-dockerd[1221]: time="2024-06-03T13:26:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/592e41948e3a8d10d61900386439c92f5c2efa218ac89b4292e1f0144d081c73/resolv.conf as [nameserver 172.22.144.1]"
	Jun 03 13:26:26 ha-149700 cri-dockerd[1221]: time="2024-06-03T13:26:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac5843b6695175b5c0547fa28b499bdc5c40b9757715976b1536d2eec47b4533/resolv.conf as [nameserver 172.22.144.1]"
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.451936006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.452335409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.452450710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.452813014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.530384826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.530635529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.530657529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.530780830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.636458701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.636643803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.639348128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.639622530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:34:40 ha-149700 dockerd[1320]: time="2024-06-03T13:34:40.642624291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:34:40 ha-149700 dockerd[1320]: time="2024-06-03T13:34:40.642961394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:34:40 ha-149700 dockerd[1320]: time="2024-06-03T13:34:40.642990295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:34:40 ha-149700 dockerd[1320]: time="2024-06-03T13:34:40.644048705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:34:40 ha-149700 cri-dockerd[1221]: time="2024-06-03T13:34:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/33aa4a5311373dc2b150f88764a0d251bc06a7e18caaf64acaa73130d94006cc/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 03 13:34:42 ha-149700 cri-dockerd[1221]: time="2024-06-03T13:34:42Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 03 13:34:42 ha-149700 dockerd[1320]: time="2024-06-03T13:34:42.593358002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:34:42 ha-149700 dockerd[1320]: time="2024-06-03T13:34:42.593482403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:34:42 ha-149700 dockerd[1320]: time="2024-06-03T13:34:42.593524903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:34:42 ha-149700 dockerd[1320]: time="2024-06-03T13:34:42.593685104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e2286192dae0b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   33aa4a5311373       busybox-fc5497c4f-4hfj7
	d1e8355be36fb       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   ac5843b669517       coredns-7db6d8ff4d-ptqqz
	8cad5b34eaa07       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   592e41948e3a8       storage-provisioner
	e405991670c39       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   f3e2e2177b00f       coredns-7db6d8ff4d-6qmlg
	139823d9d8d4c       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              9 minutes ago        Running             kindnet-cni               0                   d3d6215383bcd       kindnet-qphhc
	4879852b10da4       747097150317f                                                                                         9 minutes ago        Running             kube-proxy                0                   20f17f2b0d4dc       kube-proxy-9wjpn
	7a4ce070a4434       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     9 minutes ago        Running             kube-vip                  0                   67eb75fc9ff2f       kube-vip-ha-149700
	f9a72751b1c60       91be940803172                                                                                         9 minutes ago        Running             kube-apiserver            0                   9169f118d9b08       kube-apiserver-ha-149700
	962282ca80621       a52dc94f0a912                                                                                         9 minutes ago        Running             kube-scheduler            0                   0e10627407c81       kube-scheduler-ha-149700
	b491f438ec2f5       25a1387cdab82                                                                                         9 minutes ago        Running             kube-controller-manager   0                   8ae2f97837c54       kube-controller-manager-ha-149700
	108f442a1dae5       3861cfcd7c04c                                                                                         9 minutes ago        Running             etcd                      0                   c6193e9dd3f2e       etcd-ha-149700
	
	
	==> coredns [d1e8355be36f] <==
	[INFO] 10.244.1.2:36387 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.257424127s
	[INFO] 10.244.0.4:47951 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.415641726s
	[INFO] 10.244.0.4:44854 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158701s
	[INFO] 10.244.0.4:41440 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000249501s
	[INFO] 10.244.2.2:37444 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000102001s
	[INFO] 10.244.2.2:57308 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126301s
	[INFO] 10.244.2.2:50804 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010408158s
	[INFO] 10.244.2.2:47435 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001106s
	[INFO] 10.244.2.2:60556 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111401s
	[INFO] 10.244.1.2:35827 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079801s
	[INFO] 10.244.1.2:41409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068401s
	[INFO] 10.244.1.2:51750 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000057s
	[INFO] 10.244.1.2:54386 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000182001s
	[INFO] 10.244.0.4:48087 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114501s
	[INFO] 10.244.2.2:42711 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071601s
	[INFO] 10.244.2.2:51380 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000224401s
	[INFO] 10.244.1.2:47146 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000203001s
	[INFO] 10.244.0.4:44145 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114101s
	[INFO] 10.244.0.4:52464 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000345502s
	[INFO] 10.244.2.2:35477 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001039s
	[INFO] 10.244.2.2:53416 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000063601s
	[INFO] 10.244.1.2:58374 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215301s
	[INFO] 10.244.1.2:55393 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000271402s
	[INFO] 10.244.1.2:59612 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000165301s
	[INFO] 10.244.1.2:46193 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000253301s
	
	
	==> coredns [e405991670c3] <==
	[INFO] 10.244.1.2:46389 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000093801s
	[INFO] 10.244.0.4:33523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202701s
	[INFO] 10.244.0.4:40321 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000208601s
	[INFO] 10.244.0.4:59204 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.029309764s
	[INFO] 10.244.0.4:35216 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113201s
	[INFO] 10.244.0.4:43236 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.001976811s
	[INFO] 10.244.2.2:48741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133401s
	[INFO] 10.244.2.2:39388 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191301s
	[INFO] 10.244.2.2:55892 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185401s
	[INFO] 10.244.1.2:60903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137601s
	[INFO] 10.244.1.2:51322 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.005000729s
	[INFO] 10.244.1.2:46958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000053s
	[INFO] 10.244.1.2:53810 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126901s
	[INFO] 10.244.0.4:33768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145401s
	[INFO] 10.244.0.4:51440 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001451s
	[INFO] 10.244.0.4:44295 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000798s
	[INFO] 10.244.2.2:51082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179701s
	[INFO] 10.244.2.2:37686 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053s
	[INFO] 10.244.1.2:51508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191801s
	[INFO] 10.244.1.2:39529 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064801s
	[INFO] 10.244.1.2:39194 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100101s
	[INFO] 10.244.0.4:43140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104701s
	[INFO] 10.244.0.4:33173 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000352202s
	[INFO] 10.244.2.2:44233 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192601s
	[INFO] 10.244.2.2:41640 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000391602s
	
	
	==> describe nodes <==
	Name:               ha-149700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-149700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-149700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T13_26_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:25:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-149700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:35:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:35:01 +0000   Mon, 03 Jun 2024 13:25:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:35:01 +0000   Mon, 03 Jun 2024 13:25:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:35:01 +0000   Mon, 03 Jun 2024 13:25:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:35:01 +0000   Mon, 03 Jun 2024 13:26:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.22.153.250
	  Hostname:    ha-149700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 94538886f55f4cbdb7bcdf9f8a4de860
	  System UUID:                d42864a6-608c-2a4a-b3c1-27f966e2091d
	  Boot ID:                    f47c949f-9fae-4529-afa5-365efb5bd803
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hfj7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 coredns-7db6d8ff4d-6qmlg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m33s
	  kube-system                 coredns-7db6d8ff4d-ptqqz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m33s
	  kube-system                 etcd-ha-149700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m46s
	  kube-system                 kindnet-qphhc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m34s
	  kube-system                 kube-apiserver-ha-149700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 kube-controller-manager-ha-149700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 kube-proxy-9wjpn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 kube-scheduler-ha-149700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 kube-vip-ha-149700                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m32s                  kube-proxy       
	  Normal  NodeHasSufficientPID     9m56s (x7 over 9m56s)  kubelet          Node ha-149700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m56s (x8 over 9m56s)  kubelet          Node ha-149700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m56s (x8 over 9m56s)  kubelet          Node ha-149700 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m46s                  kubelet          Node ha-149700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m46s                  kubelet          Node ha-149700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m46s                  kubelet          Node ha-149700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m34s                  node-controller  Node ha-149700 event: Registered Node ha-149700 in Controller
	  Normal  NodeReady                9m21s                  kubelet          Node ha-149700 status is now: NodeReady
	  Normal  RegisteredNode           5m40s                  node-controller  Node ha-149700 event: Registered Node ha-149700 in Controller
	  Normal  RegisteredNode           107s                   node-controller  Node ha-149700 event: Registered Node ha-149700 in Controller
	
	
	Name:               ha-149700-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-149700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-149700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T13_29_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:29:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-149700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:35:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:34:52 +0000   Mon, 03 Jun 2024 13:29:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:34:52 +0000   Mon, 03 Jun 2024 13:29:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:34:52 +0000   Mon, 03 Jun 2024 13:29:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:34:52 +0000   Mon, 03 Jun 2024 13:30:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.22.154.57
	  Hostname:    ha-149700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b6ae065fc4f949549aef64be5ac14c55
	  System UUID:                0944961d-e844-8341-bc02-bc74b0797070
	  Boot ID:                    71ed6a23-125e-422f-b4c4-85b45c319b1d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vzbnc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-149700-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m58s
	  kube-system                 kindnet-l2cph                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m1s
	  kube-system                 kube-apiserver-ha-149700-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-controller-manager-ha-149700-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-vbzvt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-ha-149700-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-vip-ha-149700-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m1s (x8 over 6m1s)  kubelet          Node ha-149700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)  kubelet          Node ha-149700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x7 over 6m1s)  kubelet          Node ha-149700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m59s                node-controller  Node ha-149700-m02 event: Registered Node ha-149700-m02 in Controller
	  Normal  RegisteredNode           5m40s                node-controller  Node ha-149700-m02 event: Registered Node ha-149700-m02 in Controller
	  Normal  RegisteredNode           107s                 node-controller  Node ha-149700-m02 event: Registered Node ha-149700-m02 in Controller
	
	
	Name:               ha-149700-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-149700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-149700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T13_33_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:33:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-149700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:35:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:35:09 +0000   Mon, 03 Jun 2024 13:33:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:35:09 +0000   Mon, 03 Jun 2024 13:33:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:35:09 +0000   Mon, 03 Jun 2024 13:33:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:35:09 +0000   Mon, 03 Jun 2024 13:33:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.22.150.43
	  Hostname:    ha-149700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e056ce9e2ad145808a2a175e96b6ed65
	  System UUID:                afbef1cc-fa5e-564f-9694-5a0a2250e53c
	  Boot ID:                    a6517f5c-10bf-400e-bc82-3672ccf32932
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fkkts                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-149700-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m5s
	  kube-system                 kindnet-v4w4l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m9s
	  kube-system                 kube-apiserver-ha-149700-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-ha-149700-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-pvnfv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-scheduler-ha-149700-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-vip-ha-149700-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node ha-149700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node ha-149700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x7 over 2m9s)  kubelet          Node ha-149700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m5s                 node-controller  Node ha-149700-m03 event: Registered Node ha-149700-m03 in Controller
	  Normal  RegisteredNode           2m4s                 node-controller  Node ha-149700-m03 event: Registered Node ha-149700-m03 in Controller
	  Normal  RegisteredNode           107s                 node-controller  Node ha-149700-m03 event: Registered Node ha-149700-m03 in Controller
	
	
	==> dmesg <==
	[Jun 3 13:24] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.226516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +47.294251] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.166575] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Jun 3 13:25] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.096039] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.504710] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	[  +0.190765] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.209889] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +2.758049] systemd-fstab-generator[1174]: Ignoring "noauto" option for root device
	[  +0.180393] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.184217] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.248547] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[ +11.490901] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.095424] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.377453] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +5.250046] systemd-fstab-generator[1698]: Ignoring "noauto" option for root device
	[  +0.102640] kauditd_printk_skb: 73 callbacks suppressed
	[ +10.163728] systemd-fstab-generator[2198]: Ignoring "noauto" option for root device
	[  +0.141800] kauditd_printk_skb: 72 callbacks suppressed
	[Jun 3 13:26] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.688628] kauditd_printk_skb: 29 callbacks suppressed
	[Jun 3 13:29] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [108f442a1dae] <==
	{"level":"warn","ts":"2024-06-03T13:33:42.562108Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"1d81fdd12c153b25","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"34.357894ms"}
	{"level":"warn","ts":"2024-06-03T13:33:42.653879Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"1d81fdd12c153b25","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-06-03T13:33:42.685521Z","caller":"traceutil/trace.go:171","msg":"trace[1965425617] transaction","detail":"{read_only:false; response_revision:1549; number_of_response:1; }","duration":"319.910532ms","start":"2024-06-03T13:33:42.365591Z","end":"2024-06-03T13:33:42.685501Z","steps":["trace[1965425617] 'process raft request'  (duration: 319.68853ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:33:42.68567Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:33:42.365574Z","time spent":"320.011933ms","remote":"127.0.0.1:58258","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:1452 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"info","ts":"2024-06-03T13:33:42.785813Z","caller":"traceutil/trace.go:171","msg":"trace[1362850241] linearizableReadLoop","detail":"{readStateIndex:1719; appliedIndex:1720; }","duration":"389.850517ms","start":"2024-06-03T13:33:42.395945Z","end":"2024-06-03T13:33:42.785796Z","steps":["trace[1362850241] 'read index received'  (duration: 389.844717ms)","trace[1362850241] 'applied index is now lower than readState.Index'  (duration: 4.4µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T13:33:42.823906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"427.92399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T13:33:42.823957Z","caller":"traceutil/trace.go:171","msg":"trace[355508714] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1549; }","duration":"428.004091ms","start":"2024-06-03T13:33:42.395939Z","end":"2024-06-03T13:33:42.823943Z","steps":["trace[355508714] 'agreement among raft nodes before linearized reading'  (duration: 390.07052ms)","trace[355508714] 'range keys from in-memory index tree'  (duration: 37.723169ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T13:33:42.824241Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:33:42.395861Z","time spent":"428.367594ms","remote":"127.0.0.1:57850","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-06-03T13:33:42.824441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.175402ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-149700-m03\" ","response":"range_response_count:1 size:3318"}
	{"level":"info","ts":"2024-06-03T13:33:42.824479Z","caller":"traceutil/trace.go:171","msg":"trace[573284436] range","detail":"{range_begin:/registry/minions/ha-149700-m03; range_end:; response_count:1; response_revision:1551; }","duration":"286.289203ms","start":"2024-06-03T13:33:42.53818Z","end":"2024-06-03T13:33:42.824469Z","steps":["trace[573284436] 'agreement among raft nodes before linearized reading'  (duration: 286.204602ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:33:42.824565Z","caller":"traceutil/trace.go:171","msg":"trace[392015447] transaction","detail":"{read_only:false; response_revision:1551; number_of_response:1; }","duration":"299.607834ms","start":"2024-06-03T13:33:42.524949Z","end":"2024-06-03T13:33:42.824557Z","steps":["trace[392015447] 'process raft request'  (duration: 299.288631ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:33:42.824618Z","caller":"traceutil/trace.go:171","msg":"trace[1699037106] transaction","detail":"{read_only:false; response_revision:1550; number_of_response:1; }","duration":"302.121958ms","start":"2024-06-03T13:33:42.522485Z","end":"2024-06-03T13:33:42.824607Z","steps":["trace[1699037106] 'process raft request'  (duration: 263.48408ms)","trace[1699037106] 'compare'  (duration: 37.953872ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T13:33:42.824671Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:33:42.52245Z","time spent":"302.186759ms","remote":"127.0.0.1:57908","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":674,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/ha-149700-m03.17d581ddcbbd484e\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/ha-149700-m03.17d581ddcbbd484e\" value_size:601 lease:5974183090545200781 >> failure:<>"}
	{"level":"warn","ts":"2024-06-03T13:33:43.671147Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"1d81fdd12c153b25","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-06-03T13:33:44.161239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1e86fbc2f15d2e8 switched to configuration voters=(2126259573925165861 11666697688737764072 18182120031908823370)"}
	{"level":"info","ts":"2024-06-03T13:33:44.161712Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"8f0f70399c160902","local-member-id":"a1e86fbc2f15d2e8"}
	{"level":"info","ts":"2024-06-03T13:33:44.162076Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"a1e86fbc2f15d2e8","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"1d81fdd12c153b25"}
	{"level":"warn","ts":"2024-06-03T13:33:44.497239Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"fc53ddd60570814a","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"2.760584ms"}
	{"level":"warn","ts":"2024-06-03T13:33:44.497423Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"1d81fdd12c153b25","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"2.949586ms"}
	{"level":"info","ts":"2024-06-03T13:33:44.498329Z","caller":"traceutil/trace.go:171","msg":"trace[18023918] transaction","detail":"{read_only:false; response_revision:1560; number_of_response:1; }","duration":"195.070709ms","start":"2024-06-03T13:33:44.303243Z","end":"2024-06-03T13:33:44.498314Z","steps":["trace[18023918] 'process raft request'  (duration: 194.914308ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:33:44.499163Z","caller":"traceutil/trace.go:171","msg":"trace[1240231311] linearizableReadLoop","detail":"{readStateIndex:1734; appliedIndex:1735; }","duration":"103.718315ms","start":"2024-06-03T13:33:44.395435Z","end":"2024-06-03T13:33:44.499153Z","steps":["trace[1240231311] 'read index received'  (duration: 103.715015ms)","trace[1240231311] 'applied index is now lower than readState.Index'  (duration: 2.5µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T13:33:44.499284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.833416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T13:33:44.499378Z","caller":"traceutil/trace.go:171","msg":"trace[786947766] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1560; }","duration":"103.957417ms","start":"2024-06-03T13:33:44.395412Z","end":"2024-06-03T13:33:44.499369Z","steps":["trace[786947766] 'agreement among raft nodes before linearized reading'  (duration: 103.832116ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:33:50.147942Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.916755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:433"}
	{"level":"info","ts":"2024-06-03T13:33:50.14885Z","caller":"traceutil/trace.go:171","msg":"trace[1629296501] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1582; }","duration":"109.077966ms","start":"2024-06-03T13:33:50.039758Z","end":"2024-06-03T13:33:50.148836Z","steps":["trace[1629296501] 'range keys from in-memory index tree'  (duration: 106.689442ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:35:46 up 11 min,  0 users,  load average: 0.82, 0.66, 0.39
	Linux ha-149700 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [139823d9d8d4] <==
	I0603 13:35:02.401315       1 main.go:250] Node ha-149700-m03 has CIDR [10.244.2.0/24] 
	I0603 13:35:12.417852       1 main.go:223] Handling node with IPs: map[172.22.153.250:{}]
	I0603 13:35:12.417962       1 main.go:227] handling current node
	I0603 13:35:12.417978       1 main.go:223] Handling node with IPs: map[172.22.154.57:{}]
	I0603 13:35:12.417986       1 main.go:250] Node ha-149700-m02 has CIDR [10.244.1.0/24] 
	I0603 13:35:12.418277       1 main.go:223] Handling node with IPs: map[172.22.150.43:{}]
	I0603 13:35:12.418447       1 main.go:250] Node ha-149700-m03 has CIDR [10.244.2.0/24] 
	I0603 13:35:22.428105       1 main.go:223] Handling node with IPs: map[172.22.153.250:{}]
	I0603 13:35:22.428265       1 main.go:227] handling current node
	I0603 13:35:22.428299       1 main.go:223] Handling node with IPs: map[172.22.154.57:{}]
	I0603 13:35:22.428307       1 main.go:250] Node ha-149700-m02 has CIDR [10.244.1.0/24] 
	I0603 13:35:22.428736       1 main.go:223] Handling node with IPs: map[172.22.150.43:{}]
	I0603 13:35:22.428772       1 main.go:250] Node ha-149700-m03 has CIDR [10.244.2.0/24] 
	I0603 13:35:32.445923       1 main.go:223] Handling node with IPs: map[172.22.153.250:{}]
	I0603 13:35:32.445970       1 main.go:227] handling current node
	I0603 13:35:32.445983       1 main.go:223] Handling node with IPs: map[172.22.154.57:{}]
	I0603 13:35:32.445990       1 main.go:250] Node ha-149700-m02 has CIDR [10.244.1.0/24] 
	I0603 13:35:32.446502       1 main.go:223] Handling node with IPs: map[172.22.150.43:{}]
	I0603 13:35:32.446575       1 main.go:250] Node ha-149700-m03 has CIDR [10.244.2.0/24] 
	I0603 13:35:42.463745       1 main.go:223] Handling node with IPs: map[172.22.153.250:{}]
	I0603 13:35:42.463830       1 main.go:227] handling current node
	I0603 13:35:42.463847       1 main.go:223] Handling node with IPs: map[172.22.154.57:{}]
	I0603 13:35:42.463855       1 main.go:250] Node ha-149700-m02 has CIDR [10.244.1.0/24] 
	I0603 13:35:42.464568       1 main.go:223] Handling node with IPs: map[172.22.150.43:{}]
	I0603 13:35:42.464655       1 main.go:250] Node ha-149700-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [f9a72751b1c6] <==
	I0603 13:26:00.673094       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 13:26:00.729578       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0603 13:26:00.766413       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 13:26:12.498378       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0603 13:26:12.841563       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0603 13:33:38.541662       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.9µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0603 13:33:38.549938       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0603 13:33:38.554100       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0603 13:33:38.558412       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0603 13:33:38.559120       1 timeout.go:142] post-timeout activity - time-elapsed: 114.25692ms, PATCH "/api/v1/namespaces/default/events/ha-149700-m03.17d581dca38320b3" result: <nil>
	E0603 13:34:46.296264       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61870: use of closed network connection
	E0603 13:34:46.824780       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61872: use of closed network connection
	E0603 13:34:48.631001       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61874: use of closed network connection
	E0603 13:34:49.604728       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61876: use of closed network connection
	E0603 13:34:50.139760       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61878: use of closed network connection
	E0603 13:34:50.680814       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61880: use of closed network connection
	E0603 13:34:51.225296       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61882: use of closed network connection
	E0603 13:34:51.750632       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61884: use of closed network connection
	E0603 13:34:52.268301       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61886: use of closed network connection
	E0603 13:34:53.177135       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61890: use of closed network connection
	E0603 13:35:03.720131       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61892: use of closed network connection
	E0603 13:35:04.229291       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61895: use of closed network connection
	E0603 13:35:14.735704       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61897: use of closed network connection
	E0603 13:35:15.253550       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61900: use of closed network connection
	E0603 13:35:25.799651       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61902: use of closed network connection
	
	
	==> kube-controller-manager [b491f438ec2f] <==
	I0603 13:26:27.395810       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 13:26:27.407490       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.702µs"
	I0603 13:26:27.473137       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.940522ms"
	I0603 13:26:27.475616       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.001µs"
	I0603 13:26:27.531238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.725476ms"
	I0603 13:26:27.532973       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.801µs"
	I0603 13:29:45.604979       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-149700-m02\" does not exist"
	I0603 13:29:45.627650       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-149700-m02" podCIDRs=["10.244.1.0/24"]
	I0603 13:29:47.436614       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-149700-m02"
	I0603 13:33:37.611878       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-149700-m03\" does not exist"
	I0603 13:33:37.633390       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-149700-m03" podCIDRs=["10.244.2.0/24"]
	I0603 13:33:42.828162       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-149700-m03"
	I0603 13:34:39.604426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="159.116437ms"
	I0603 13:34:39.751644       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="146.665316ms"
	I0603 13:34:40.092809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="341.069592ms"
	I0603 13:34:40.334023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="241.053626ms"
	I0603 13:34:40.402709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.155648ms"
	I0603 13:34:40.402815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.2µs"
	I0603 13:34:40.803526       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.001µs"
	I0603 13:34:42.914656       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.353807ms"
	I0603 13:34:42.915280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.601µs"
	I0603 13:34:43.053326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.748399ms"
	I0603 13:34:43.054254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="124.601µs"
	I0603 13:34:43.451459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.034606ms"
	I0603 13:34:43.452038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.801µs"
	
	
	==> kube-proxy [4879852b10da] <==
	I0603 13:26:14.358495       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:26:14.373061       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.153.250"]
	I0603 13:26:14.425474       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:26:14.425650       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:26:14.425675       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:26:14.433307       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:26:14.433745       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:26:14.434072       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:26:14.435488       1 config.go:192] "Starting service config controller"
	I0603 13:26:14.436145       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:26:14.436725       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:26:14.436983       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:26:14.445276       1 config.go:319] "Starting node config controller"
	I0603 13:26:14.445289       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:26:14.537663       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:26:14.545512       1 shared_informer.go:320] Caches are synced for node config
	I0603 13:26:14.545597       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [962282ca8062] <==
	W0603 13:25:57.166877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 13:25:57.166914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 13:25:57.177724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 13:25:57.177917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 13:25:57.363313       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 13:25:57.363982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 13:25:57.368106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 13:25:57.368158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 13:25:57.452000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 13:25:57.452127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 13:25:57.560458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 13:25:57.560721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 13:25:57.568759       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 13:25:57.569059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 13:25:57.615976       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 13:25:57.616025       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 13:26:00.768329       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0603 13:33:37.757427       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-v4w4l\": pod kindnet-v4w4l is already assigned to node \"ha-149700-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-v4w4l" node="ha-149700-m03"
	E0603 13:33:37.759464       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3df37f74-f7b9-43c1-854b-38ab7224fc66(kube-system/kindnet-v4w4l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-v4w4l"
	E0603 13:33:37.759693       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-v4w4l\": pod kindnet-v4w4l is already assigned to node \"ha-149700-m03\"" pod="kube-system/kindnet-v4w4l"
	I0603 13:33:37.760020       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-v4w4l" node="ha-149700-m03"
	E0603 13:34:39.543023       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vzbnc\": pod busybox-fc5497c4f-vzbnc is already assigned to node \"ha-149700-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-vzbnc" node="ha-149700-m02"
	E0603 13:34:39.543170       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod aef956f6-f05c-45d8-b772-784ff2b201df(default/busybox-fc5497c4f-vzbnc) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-vzbnc"
	E0603 13:34:39.543327       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vzbnc\": pod busybox-fc5497c4f-vzbnc is already assigned to node \"ha-149700-m02\"" pod="default/busybox-fc5497c4f-vzbnc"
	I0603 13:34:39.543593       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-vzbnc" node="ha-149700-m02"
	
	
	==> kubelet <==
	Jun 03 13:31:00 ha-149700 kubelet[2205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:31:00 ha-149700 kubelet[2205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:32:00 ha-149700 kubelet[2205]: E0603 13:32:00.849800    2205 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:32:00 ha-149700 kubelet[2205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:32:00 ha-149700 kubelet[2205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:32:00 ha-149700 kubelet[2205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:32:00 ha-149700 kubelet[2205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:33:00 ha-149700 kubelet[2205]: E0603 13:33:00.849121    2205 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:33:00 ha-149700 kubelet[2205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:33:00 ha-149700 kubelet[2205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:33:00 ha-149700 kubelet[2205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:33:00 ha-149700 kubelet[2205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:34:00 ha-149700 kubelet[2205]: E0603 13:34:00.848865    2205 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:34:00 ha-149700 kubelet[2205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:34:00 ha-149700 kubelet[2205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:34:00 ha-149700 kubelet[2205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:34:00 ha-149700 kubelet[2205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:34:39 ha-149700 kubelet[2205]: I0603 13:34:39.614558    2205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=498.614536767 podStartE2EDuration="8m18.614536767s" podCreationTimestamp="2024-06-03 13:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 13:26:27.545438749 +0000 UTC m=+26.999104865" watchObservedRunningTime="2024-06-03 13:34:39.614536767 +0000 UTC m=+519.068202883"
	Jun 03 13:34:39 ha-149700 kubelet[2205]: I0603 13:34:39.616355    2205 topology_manager.go:215] "Topology Admit Handler" podUID="fca8ff2d-26d6-4748-8113-24aa6d6ac555" podNamespace="default" podName="busybox-fc5497c4f-4hfj7"
	Jun 03 13:34:39 ha-149700 kubelet[2205]: I0603 13:34:39.802632    2205 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc8cp\" (UniqueName: \"kubernetes.io/projected/fca8ff2d-26d6-4748-8113-24aa6d6ac555-kube-api-access-pc8cp\") pod \"busybox-fc5497c4f-4hfj7\" (UID: \"fca8ff2d-26d6-4748-8113-24aa6d6ac555\") " pod="default/busybox-fc5497c4f-4hfj7"
	Jun 03 13:35:00 ha-149700 kubelet[2205]: E0603 13:35:00.854153    2205 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:35:00 ha-149700 kubelet[2205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:35:00 ha-149700 kubelet[2205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:35:00 ha-149700 kubelet[2205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:35:00 ha-149700 kubelet[2205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:35:38.398309    7780 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-149700 -n ha-149700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-149700 -n ha-149700: (12.6877766s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-149700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (69.40s)
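Editor's note: the warning "Unable to resolve the current Docker CLI context \"default\"" appears at the start of nearly every minikube invocation in this run (see the stderr blocks above and below). It does not stop the Hyper-V driver commands from proceeding, but it can likely be silenced by repairing the Docker CLI context on the runner. A minimal sketch, assuming the stale context reference lives in the jenkins user's Docker config (the config path below is an assumption; the docker command is standard CLI):

	# Re-point the CLI at the built-in default context; this rewrites
	# %USERPROFILE%\.docker\config.json so minikube no longer looks for a
	# missing contexts\meta\...\meta.json file.
	docker context use default

If the docker CLI is not installed on the runner, removing the "currentContext" key from that config.json by hand should have the same effect.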

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (94.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 node stop m02 -v=7 --alsologtostderr: (36.1128528s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-149700 status -v=7 --alsologtostderr: exit status 1 (22.734275s)

                                                
                                                
** stderr ** 
	W0603 13:52:34.208904    3904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0603 13:52:34.290103    3904 out.go:291] Setting OutFile to fd 1364 ...
	I0603 13:52:34.291339    3904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:52:34.291339    3904 out.go:304] Setting ErrFile to fd 1480...
	I0603 13:52:34.291339    3904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:52:34.308241    3904 out.go:298] Setting JSON to false
	I0603 13:52:34.308241    3904 mustload.go:65] Loading cluster: ha-149700
	I0603 13:52:34.308241    3904 notify.go:220] Checking for updates...
	I0603 13:52:34.309241    3904 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:52:34.309241    3904 status.go:255] checking status of ha-149700 ...
	I0603 13:52:34.309241    3904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:52:36.581218    3904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:52:36.581218    3904 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:52:36.581218    3904 status.go:330] ha-149700 host status = "Running" (err=<nil>)
	I0603 13:52:36.581218    3904 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:52:36.581823    3904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:52:38.857463    3904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:52:38.857907    3904 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:52:38.858045    3904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:52:41.534158    3904 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:52:41.534158    3904 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:52:41.534158    3904 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:52:41.548127    3904 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 13:52:41.548127    3904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:52:43.767082    3904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:52:43.767492    3904 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:52:43.767570    3904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:52:46.494092    3904 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:52:46.494092    3904 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:52:46.495094    3904 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:52:46.594658    3904 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.04649s)
	I0603 13:52:46.609534    3904 ssh_runner.go:195] Run: systemctl --version
	I0603 13:52:46.631471    3904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:52:46.657629    3904 kubeconfig.go:125] found "ha-149700" server: "https://172.22.159.254:8443"
	I0603 13:52:46.657725    3904 api_server.go:166] Checking apiserver status ...
	I0603 13:52:46.668887    3904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:46.709394    3904 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2101/cgroup
	W0603 13:52:46.727850    3904 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2101/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:52:46.739988    3904 ssh_runner.go:195] Run: ls
	I0603 13:52:46.748086    3904 api_server.go:253] Checking apiserver healthz at https://172.22.159.254:8443/healthz ...
	I0603 13:52:46.755525    3904 api_server.go:279] https://172.22.159.254:8443/healthz returned 200:
	ok
	I0603 13:52:46.755525    3904 status.go:422] ha-149700 apiserver status = Running (err=<nil>)
	I0603 13:52:46.755525    3904 status.go:257] ha-149700 status: &{Name:ha-149700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 13:52:46.755745    3904 status.go:255] checking status of ha-149700-m02 ...
	I0603 13:52:46.756198    3904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:52:48.962388    3904 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 13:52:48.962388    3904 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:52:48.962642    3904 status.go:330] ha-149700-m02 host status = "Stopped" (err=<nil>)
	I0603 13:52:48.962642    3904 status.go:343] host is not running, skipping remaining checks
	I0603 13:52:48.962642    3904 status.go:257] ha-149700-m02 status: &{Name:ha-149700-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 13:52:48.962872    3904 status.go:255] checking status of ha-149700-m03 ...
	I0603 13:52:48.963044    3904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:52:51.179161    3904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:52:51.180038    3904 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:52:51.180107    3904 status.go:330] ha-149700-m03 host status = "Running" (err=<nil>)
	I0603 13:52:51.180186    3904 host.go:66] Checking if "ha-149700-m03" exists ...
	I0603 13:52:51.180878    3904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:52:53.355649    3904 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:52:53.355649    3904 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:52:53.356153    3904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:52:55.962512    3904 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:52:55.962969    3904 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:52:55.962969    3904 host.go:66] Checking if "ha-149700-m03" exists ...
	I0603 13:52:55.978437    3904 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 13:52:55.978437    3904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-149700 status -v=7 --alsologtostderr" : exit status 1
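For reference, the failing sequence above can be reproduced from a PowerShell prompt in the repository root; this is a minimal sketch and not part of the test harness. It assumes the ha-149700 profile from this run still exists and that out/minikube-windows-amd64.exe is the binary under test (both names taken from the log above).

	# Stop the secondary control-plane node, then query overall cluster status.
	out/minikube-windows-amd64.exe -p ha-149700 node stop m02 -v=7 --alsologtostderr
	out/minikube-windows-amd64.exe -p ha-149700 status -v=7 --alsologtostderr
	# minikube status exits with a non-zero code when any node it checks is not fully running,
	# so the exit code of the second command is what distinguishes pass from fail here.
	Write-Host "status exit code: $LASTEXITCODE"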
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-149700 -n ha-149700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-149700 -n ha-149700: (12.4514091s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 logs -n 25: (8.815893s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-149700 cp ha-149700-m03:/home/docker/cp-test.txt                                                                       | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:47 UTC | 03 Jun 24 13:47 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4159683526\001\cp-test_ha-149700-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n                                                                                                          | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:47 UTC | 03 Jun 24 13:47 UTC |
	|         | ha-149700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-149700 cp ha-149700-m03:/home/docker/cp-test.txt                                                                       | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:47 UTC | 03 Jun 24 13:47 UTC |
	|         | ha-149700:/home/docker/cp-test_ha-149700-m03_ha-149700.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n                                                                                                          | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:47 UTC | 03 Jun 24 13:48 UTC |
	|         | ha-149700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n ha-149700 sudo cat                                                                                       | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:48 UTC | 03 Jun 24 13:48 UTC |
	|         | /home/docker/cp-test_ha-149700-m03_ha-149700.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-149700 cp ha-149700-m03:/home/docker/cp-test.txt                                                                       | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:48 UTC | 03 Jun 24 13:48 UTC |
	|         | ha-149700-m02:/home/docker/cp-test_ha-149700-m03_ha-149700-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n                                                                                                          | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:48 UTC | 03 Jun 24 13:48 UTC |
	|         | ha-149700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n ha-149700-m02 sudo cat                                                                                   | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:48 UTC | 03 Jun 24 13:48 UTC |
	|         | /home/docker/cp-test_ha-149700-m03_ha-149700-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-149700 cp ha-149700-m03:/home/docker/cp-test.txt                                                                       | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:48 UTC | 03 Jun 24 13:49 UTC |
	|         | ha-149700-m04:/home/docker/cp-test_ha-149700-m03_ha-149700-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n                                                                                                          | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:49 UTC | 03 Jun 24 13:49 UTC |
	|         | ha-149700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n ha-149700-m04 sudo cat                                                                                   | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:49 UTC | 03 Jun 24 13:49 UTC |
	|         | /home/docker/cp-test_ha-149700-m03_ha-149700-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-149700 cp testdata\cp-test.txt                                                                                         | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:49 UTC | 03 Jun 24 13:49 UTC |
	|         | ha-149700-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n                                                                                                          | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:49 UTC | 03 Jun 24 13:49 UTC |
	|         | ha-149700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-149700 cp ha-149700-m04:/home/docker/cp-test.txt                                                                       | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:49 UTC | 03 Jun 24 13:49 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4159683526\001\cp-test_ha-149700-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n                                                                                                          | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:49 UTC | 03 Jun 24 13:50 UTC |
	|         | ha-149700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-149700 cp ha-149700-m04:/home/docker/cp-test.txt                                                                       | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:50 UTC | 03 Jun 24 13:50 UTC |
	|         | ha-149700:/home/docker/cp-test_ha-149700-m04_ha-149700.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n                                                                                                          | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:50 UTC | 03 Jun 24 13:50 UTC |
	|         | ha-149700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n ha-149700 sudo cat                                                                                       | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:50 UTC | 03 Jun 24 13:50 UTC |
	|         | /home/docker/cp-test_ha-149700-m04_ha-149700.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-149700 cp ha-149700-m04:/home/docker/cp-test.txt                                                                       | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:50 UTC | 03 Jun 24 13:51 UTC |
	|         | ha-149700-m02:/home/docker/cp-test_ha-149700-m04_ha-149700-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n                                                                                                          | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:51 UTC | 03 Jun 24 13:51 UTC |
	|         | ha-149700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n ha-149700-m02 sudo cat                                                                                   | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:51 UTC | 03 Jun 24 13:51 UTC |
	|         | /home/docker/cp-test_ha-149700-m04_ha-149700-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-149700 cp ha-149700-m04:/home/docker/cp-test.txt                                                                       | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:51 UTC | 03 Jun 24 13:51 UTC |
	|         | ha-149700-m03:/home/docker/cp-test_ha-149700-m04_ha-149700-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n                                                                                                          | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:51 UTC | 03 Jun 24 13:51 UTC |
	|         | ha-149700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-149700 ssh -n ha-149700-m03 sudo cat                                                                                   | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:51 UTC | 03 Jun 24 13:51 UTC |
	|         | /home/docker/cp-test_ha-149700-m04_ha-149700-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-149700 node stop m02 -v=7                                                                                              | ha-149700 | minikube3\jenkins | v1.33.1 | 03 Jun 24 13:51 UTC | 03 Jun 24 13:52 UTC |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:22:56
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:22:56.971779   15052 out.go:291] Setting OutFile to fd 1132 ...
	I0603 13:22:56.972464   15052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:22:56.972464   15052 out.go:304] Setting ErrFile to fd 960...
	I0603 13:22:56.972464   15052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:22:56.997789   15052 out.go:298] Setting JSON to false
	I0603 13:22:57.000819   15052 start.go:129] hostinfo: {"hostname":"minikube3","uptime":21905,"bootTime":1717399071,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 13:22:57.000819   15052 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 13:22:57.005553   15052 out.go:177] * [ha-149700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 13:22:57.012713   15052 notify.go:220] Checking for updates...
	I0603 13:22:57.014937   15052 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:22:57.017495   15052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:22:57.020235   15052 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 13:22:57.022881   15052 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:22:57.025391   15052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:22:57.028824   15052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:23:02.588214   15052 out.go:177] * Using the hyperv driver based on user configuration
	I0603 13:23:02.592073   15052 start.go:297] selected driver: hyperv
	I0603 13:23:02.592073   15052 start.go:901] validating driver "hyperv" against <nil>
	I0603 13:23:02.592073   15052 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:23:02.645291   15052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 13:23:02.646831   15052 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:23:02.646905   15052 cni.go:84] Creating CNI manager for ""
	I0603 13:23:02.646997   15052 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0603 13:23:02.646997   15052 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0603 13:23:02.647201   15052 start.go:340] cluster config:
	{Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:23:02.647557   15052 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:23:02.651359   15052 out.go:177] * Starting "ha-149700" primary control-plane node in "ha-149700" cluster
	I0603 13:23:02.655235   15052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 13:23:02.655540   15052 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 13:23:02.655603   15052 cache.go:56] Caching tarball of preloaded images
	I0603 13:23:02.656037   15052 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 13:23:02.656195   15052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 13:23:02.656854   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:23:02.657015   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json: {Name:mk8cf1b94df5066df9477edea2b9709544c10d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:23:02.657680   15052 start.go:360] acquireMachinesLock for ha-149700: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:23:02.658290   15052 start.go:364] duration metric: took 609.4µs to acquireMachinesLock for "ha-149700"
	I0603 13:23:02.658290   15052 start.go:93] Provisioning new machine with config: &{Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:23:02.658290   15052 start.go:125] createHost starting for "" (driver="hyperv")
	I0603 13:23:02.661683   15052 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 13:23:02.661683   15052 start.go:159] libmachine.API.Create for "ha-149700" (driver="hyperv")
	I0603 13:23:02.661683   15052 client.go:168] LocalClient.Create starting
	I0603 13:23:02.662681   15052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0603 13:23:02.662681   15052 main.go:141] libmachine: Decoding PEM data...
	I0603 13:23:02.662681   15052 main.go:141] libmachine: Parsing certificate...
	I0603 13:23:02.662681   15052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0603 13:23:02.663685   15052 main.go:141] libmachine: Decoding PEM data...
	I0603 13:23:02.663685   15052 main.go:141] libmachine: Parsing certificate...
	I0603 13:23:02.663685   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 13:23:04.831013   15052 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 13:23:04.831013   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:04.831013   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 13:23:06.600809   15052 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 13:23:06.600867   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:06.600867   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 13:23:08.092889   15052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 13:23:08.092889   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:08.093065   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 13:23:11.805594   15052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 13:23:11.805803   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:11.808205   15052 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 13:23:12.301798   15052 main.go:141] libmachine: Creating SSH key...
	I0603 13:23:12.600518   15052 main.go:141] libmachine: Creating VM...
	I0603 13:23:12.600890   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 13:23:15.505229   15052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 13:23:15.505229   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:15.505229   15052 main.go:141] libmachine: Using switch "Default Switch"
	I0603 13:23:15.505229   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 13:23:17.256503   15052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 13:23:17.257443   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:17.257555   15052 main.go:141] libmachine: Creating VHD
	I0603 13:23:17.257555   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 13:23:21.000794   15052 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6C3DAB81-D3E4-465D-93E0-487E78DBE9F3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 13:23:21.000794   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:21.001716   15052 main.go:141] libmachine: Writing magic tar header
	I0603 13:23:21.001716   15052 main.go:141] libmachine: Writing SSH key tar header
	I0603 13:23:21.012967   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 13:23:24.158322   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:24.158322   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:24.158515   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\disk.vhd' -SizeBytes 20000MB
	I0603 13:23:26.700668   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:26.701363   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:26.701497   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-149700 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 13:23:30.293322   15052 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-149700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 13:23:30.293322   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:30.294384   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-149700 -DynamicMemoryEnabled $false
	I0603 13:23:32.525424   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:32.525424   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:32.525645   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-149700 -Count 2
	I0603 13:23:34.688121   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:34.688483   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:34.688633   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-149700 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\boot2docker.iso'
	I0603 13:23:37.322304   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:37.322424   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:37.322424   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-149700 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\disk.vhd'
	I0603 13:23:39.966344   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:39.966597   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:39.966717   15052 main.go:141] libmachine: Starting VM...
	I0603 13:23:39.966765   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-149700
	I0603 13:23:43.020412   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:43.021256   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:43.021290   15052 main.go:141] libmachine: Waiting for host to start...
	I0603 13:23:43.021290   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:23:45.253602   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:23:45.253799   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:45.253799   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:23:47.749527   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:47.749527   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:48.759800   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:23:50.984916   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:23:50.984916   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:50.985152   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:23:53.506570   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:53.507075   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:54.510248   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:23:56.697481   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:23:56.697481   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:23:56.698452   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:23:59.169924   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:23:59.170413   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:00.180091   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:02.485122   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:02.485213   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:02.485213   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:05.008919   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:24:05.008919   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:06.015954   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:08.278081   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:08.278231   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:08.278337   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:10.873641   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:10.874620   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:10.874742   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:13.053090   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:13.054058   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:13.054058   15052 machine.go:94] provisionDockerMachine start ...
	I0603 13:24:13.054058   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:15.220841   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:15.220841   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:15.220841   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:17.761253   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:17.762305   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:17.767870   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:24:17.778210   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:24:17.778210   15052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:24:17.914383   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:24:17.914383   15052 buildroot.go:166] provisioning hostname "ha-149700"
	I0603 13:24:17.914946   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:20.024781   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:20.024781   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:20.024781   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:22.526550   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:22.526550   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:22.544151   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:24:22.544845   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:24:22.544845   15052 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-149700 && echo "ha-149700" | sudo tee /etc/hostname
	I0603 13:24:22.702323   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-149700
	
	I0603 13:24:22.702323   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:24.732534   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:24.732534   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:24.743687   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:27.196276   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:27.196276   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:27.212536   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:24:27.213102   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:24:27.213102   15052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-149700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-149700/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-149700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:24:27.362606   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:24:27.362606   15052 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 13:24:27.362606   15052 buildroot.go:174] setting up certificates
	I0603 13:24:27.362606   15052 provision.go:84] configureAuth start
	I0603 13:24:27.363161   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:29.442103   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:29.442103   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:29.454703   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:31.937506   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:31.937506   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:31.948526   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:33.980937   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:33.980937   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:33.992876   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:36.436525   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:36.448333   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:36.448333   15052 provision.go:143] copyHostCerts
	I0603 13:24:36.448535   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 13:24:36.448870   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 13:24:36.448946   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 13:24:36.449366   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 13:24:36.450675   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 13:24:36.450993   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 13:24:36.450993   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 13:24:36.450993   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 13:24:36.452060   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 13:24:36.452060   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 13:24:36.452642   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 13:24:36.452958   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 13:24:36.453823   15052 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-149700 san=[127.0.0.1 172.22.153.250 ha-149700 localhost minikube]
	I0603 13:24:36.614064   15052 provision.go:177] copyRemoteCerts
	I0603 13:24:36.624718   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:24:36.624718   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:38.699126   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:38.699126   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:38.710513   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:41.106225   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:41.119523   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:41.119797   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:24:41.233934   15052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.609177s)
	I0603 13:24:41.233934   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 13:24:41.234564   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:24:41.278632   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 13:24:41.278632   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0603 13:24:41.313310   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 13:24:41.320690   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:24:41.355137   15052 provision.go:87] duration metric: took 13.9924152s to configureAuth
	I0603 13:24:41.355137   15052 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:24:41.362195   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:24:41.362195   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:43.400999   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:43.412033   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:43.412033   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:45.805813   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:45.816423   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:45.822508   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:24:45.823040   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:24:45.823220   15052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 13:24:45.957871   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 13:24:45.957955   15052 buildroot.go:70] root file system type: tmpfs
	I0603 13:24:45.958142   15052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 13:24:45.958221   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:47.989961   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:47.989961   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:47.990052   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:50.381488   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:50.392253   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:50.398456   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:24:50.399063   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:24:50.399219   15052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 13:24:50.558987   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 13:24:50.558987   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:52.582144   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:52.582144   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:52.582144   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:24:54.983560   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:24:54.983560   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:55.000888   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:24:55.001559   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:24:55.001559   15052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 13:24:57.148984   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
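The command above installs the rendered unit with a write-if-changed idiom: the new file is compared against the one on disk, and only on a difference (or, as here, when no unit exists yet) is it moved into place and the service reloaded, enabled, and restarted. A minimal Go sketch of composing that one-liner (hypothetical helper, not minikube's actual code):

    package main

    import "fmt"

    // installIfChanged returns a shell command that installs a rendered
    // systemd unit only when it differs from what is already on disk,
    // then reloads systemd and enables/restarts the service.
    func installIfChanged(unit, svc string) string {
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
                "sudo systemctl -f restart %[2]s; }",
            unit, svc)
    }

    func main() {
        fmt.Println(installIfChanged("/lib/systemd/system/docker.service", "docker"))
    }

The same pattern keeps repeated provisioning runs idempotent: an unchanged unit never triggers a docker restart.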
	
	I0603 13:24:57.148984   15052 machine.go:97] duration metric: took 44.0945608s to provisionDockerMachine
	I0603 13:24:57.148984   15052 client.go:171] duration metric: took 1m54.4863569s to LocalClient.Create
	I0603 13:24:57.148984   15052 start.go:167] duration metric: took 1m54.4863569s to libmachine.API.Create "ha-149700"
	I0603 13:24:57.148984   15052 start.go:293] postStartSetup for "ha-149700" (driver="hyperv")
	I0603 13:24:57.148984   15052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:24:57.159789   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:24:57.159789   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:24:59.257239   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:24:59.257239   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:24:59.268152   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:01.699585   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:01.710420   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:01.710614   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:25:01.820812   15052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6609839s)
	I0603 13:25:01.831267   15052 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:25:01.838997   15052 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:25:01.839090   15052 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 13:25:01.839542   15052 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 13:25:01.839917   15052 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 13:25:01.839917   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 13:25:01.851988   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:25:01.869309   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 13:25:01.912588   15052 start.go:296] duration metric: took 4.763564s for postStartSetup
	I0603 13:25:01.915943   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:25:03.908512   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:25:03.908512   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:03.919617   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:06.368965   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:06.368965   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:06.379687   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:25:06.383105   15052 start.go:128] duration metric: took 2m3.7237942s to createHost
	I0603 13:25:06.383290   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:25:08.358784   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:25:08.358784   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:08.369990   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:10.817917   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:10.817917   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:10.834799   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:25:10.834945   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:25:10.834945   15052 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:25:10.974885   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717421110.983953263
	
	I0603 13:25:10.974885   15052 fix.go:216] guest clock: 1717421110.983953263
	I0603 13:25:10.974885   15052 fix.go:229] Guest: 2024-06-03 13:25:10.983953263 +0000 UTC Remote: 2024-06-03 13:25:06.383105 +0000 UTC m=+129.573838201 (delta=4.600848263s)
	I0603 13:25:10.974885   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:25:13.012725   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:25:13.012725   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:13.012725   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:15.451275   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:15.451275   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:15.456697   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:25:15.457543   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.153.250 22 <nil> <nil>}
	I0603 13:25:15.457543   15052 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717421110
	I0603 13:25:15.601465   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 13:25:10 UTC 2024
	
	I0603 13:25:15.602021   15052 fix.go:236] clock set: Mon Jun  3 13:25:10 UTC 2024
	 (err=<nil>)
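The fix.go lines above read the guest clock with 'date +%s.%N', compare it against the host-side timestamp, and reset the guest with 'sudo date -s @<unix-seconds>' once the skew (about 4.6s here) is large enough. A minimal sketch of that check, with an assumed one-second tolerance and the reference seconds taken from this run:

    package main

    import (
        "fmt"
        "time"
    )

    // clockSkew reports how far apart the guest and host clocks are.
    func clockSkew(guest, host time.Time) time.Duration {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        guest := time.Unix(1717421110, 983953263)                      // parsed from `date +%s.%N` on the guest
        host := time.Date(2024, 6, 3, 13, 25, 6, 383105000, time.UTC)  // host wall clock from the log

        if clockSkew(guest, host) > time.Second { // tolerance is an assumption
            // Reset the guest to whole seconds, as recorded above.
            fmt.Printf("sudo date -s @%d\n", guest.Unix()) // sudo date -s @1717421110
        }
    }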
	I0603 13:25:15.602059   15052 start.go:83] releasing machines lock for "ha-149700", held for 2m12.9426343s
	I0603 13:25:15.602235   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:25:17.617381   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:25:17.627940   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:17.627940   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:20.021978   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:20.032664   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:20.037889   15052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:25:20.038024   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:25:20.046480   15052 ssh_runner.go:195] Run: cat /version.json
	I0603 13:25:20.046480   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:25:22.198248   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:25:22.198397   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:22.198397   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:22.206096   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:25:22.206629   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:22.206629   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:25:24.726677   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:24.726677   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:24.737483   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:25:24.759151   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:25:24.759151   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:25:24.759767   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:25:24.840241   15052 ssh_runner.go:235] Completed: cat /version.json: (4.7894203s)
	I0603 13:25:24.850395   15052 ssh_runner.go:195] Run: systemctl --version
	I0603 13:25:24.948416   15052 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.909545s)
	I0603 13:25:24.960549   15052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:25:24.968672   15052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:25:24.979283   15052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:25:25.004051   15052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
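The find command above renames any bridge or podman CNI config to <name>.mk_disabled so the container runtime ignores it; pod networking for this multinode profile is provided by kindnet instead (see the CNI manager lines later in this log). A hypothetical local-filesystem version of the same rename:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIConfigs renames bridge/podman CNI configs in dir to
    // "<name>.mk_disabled", mirroring the remote `find ... -exec mv` step
    // (sketch only; the real step runs over SSH with sudo).
    func disableBridgeCNIConfigs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
                continue
            }
            src := filepath.Join(dir, name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                return disabled, err
            }
            disabled = append(disabled, src)
        }
        return disabled, nil
    }

    func main() {
        files, err := disableBridgeCNIConfigs("/etc/cni/net.d")
        if err != nil {
            fmt.Println(err)
        }
        fmt.Println("disabled:", files) // e.g. [/etc/cni/net.d/87-podman-bridge.conflist]
    }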
	I0603 13:25:25.004051   15052 start.go:494] detecting cgroup driver to use...
	I0603 13:25:25.004165   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:25:25.046848   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 13:25:25.087326   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 13:25:25.106385   15052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 13:25:25.116439   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 13:25:25.150488   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 13:25:25.183566   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 13:25:25.214460   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 13:25:25.243720   15052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:25:25.272735   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 13:25:25.303391   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 13:25:25.334212   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 13:25:25.365143   15052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:25:25.394136   15052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:25:25.420574   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:25:25.602604   15052 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 13:25:25.628109   15052 start.go:494] detecting cgroup driver to use...
	I0603 13:25:25.641855   15052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 13:25:25.671968   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:25:25.702429   15052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:25:25.740985   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:25:25.772528   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 13:25:25.810908   15052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 13:25:25.867763   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 13:25:25.893304   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:25:25.936789   15052 ssh_runner.go:195] Run: which cri-dockerd
	I0603 13:25:25.952893   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 13:25:25.969481   15052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 13:25:26.009771   15052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 13:25:26.197215   15052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 13:25:26.374711   15052 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 13:25:26.374854   15052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 13:25:26.418445   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:25:26.596522   15052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 13:25:29.080378   15052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4838353s)
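docker.go then pins the cgroupfs driver by writing a small /etc/docker/daemon.json before the docker restart above. The exact 130-byte payload is not echoed in this log; the constant below is only a representative example of what such a file typically contains:

    package main

    import "fmt"

    // daemonJSON is representative of the /etc/docker/daemon.json scp'd onto
    // the guest to pin the cgroupfs driver (assumed content; the actual
    // 130-byte payload is not shown in this log).
    const daemonJSON = `{
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }`

    func main() { fmt.Println(daemonJSON) }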
	I0603 13:25:29.099783   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 13:25:29.133358   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 13:25:29.173998   15052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 13:25:29.354108   15052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 13:25:29.544867   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:25:29.719028   15052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 13:25:29.755111   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 13:25:29.791777   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:25:29.961104   15052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 13:25:30.070180   15052 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 13:25:30.082027   15052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 13:25:30.096157   15052 start.go:562] Will wait 60s for crictl version
	I0603 13:25:30.108573   15052 ssh_runner.go:195] Run: which crictl
	I0603 13:25:30.126725   15052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:25:30.180047   15052 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 13:25:30.190874   15052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 13:25:30.234607   15052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 13:25:30.266834   15052 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 13:25:30.267000   15052 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 13:25:30.271305   15052 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 13:25:30.271305   15052 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 13:25:30.271305   15052 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 13:25:30.271305   15052 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 13:25:30.274317   15052 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 13:25:30.274317   15052 ip.go:210] interface addr: 172.22.144.1/20
	I0603 13:25:30.286678   15052 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 13:25:30.289113   15052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
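ip.go above selects the host-side address for host.minikube.internal by scanning network interfaces for the first one whose name starts with "vEthernet (Default Switch)" and taking its IPv4 address (172.22.144.1), then rewrites /etc/hosts on the guest by filtering out any stale entry and appending the new one. A simplified sketch of the interface lookup (hypothetical helper name, not minikube's actual code):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // firstIPv4ForInterfacePrefix returns the first IPv4 address of the first
    // interface whose name starts with prefix, mirroring the getIPForInterface
    // step above in simplified form.
    func firstIPv4ForInterfacePrefix(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue // e.g. "Ethernet 2" does not match "vEthernet (Default Switch)"
            }
            addrs, err := iface.Addrs()
            if err != nil {
                return nil, err
            }
            for _, addr := range addrs {
                if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    return ipnet.IP.To4(), nil // skips the fe80:: link-local address
                }
            }
        }
        return nil, fmt.Errorf("no interface matching prefix %q", prefix)
    }

    func main() {
        ip, err := firstIPv4ForInterfacePrefix("vEthernet (Default Switch)")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(ip) // 172.22.144.1 on the run above
    }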
	I0603 13:25:30.326570   15052 kubeadm.go:877] updating cluster {Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:25:30.326570   15052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 13:25:30.335177   15052 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 13:25:30.358266   15052 docker.go:685] Got preloaded images: 
	I0603 13:25:30.358266   15052 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0603 13:25:30.371422   15052 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 13:25:30.397520   15052 ssh_runner.go:195] Run: which lz4
	I0603 13:25:30.406190   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0603 13:25:30.416083   15052 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:25:30.425889   15052 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:25:30.425889   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0603 13:25:32.602573   15052 docker.go:649] duration metric: took 2.1961283s to copy over tarball
	I0603 13:25:32.615512   15052 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:25:41.132677   15052 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5170947s)
	I0603 13:25:41.132677   15052 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:25:41.198936   15052 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 13:25:41.219685   15052 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0603 13:25:41.268541   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:25:41.460379   15052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 13:25:44.392123   15052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9317196s)
	I0603 13:25:44.404585   15052 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 13:25:44.424893   15052 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0603 13:25:44.424893   15052 cache_images.go:84] Images are preloaded, skipping loading
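The preload path above works by listing images with docker images --format {{.Repository}}:{{.Tag}}, noticing that registry.k8s.io/kube-apiserver:v1.30.1 is absent, copying the lz4 preload tarball to the guest, extracting it into /var, restarting docker, and listing again. A simplified sketch of the membership check that drives that decision (hypothetical helper):

    package main

    import (
        "fmt"
        "strings"
    )

    // hasImage reports whether want appears in the output of
    // `docker images --format {{.Repository}}:{{.Tag}}`.
    func hasImage(imagesOutput, want string) bool {
        for _, line := range strings.Split(strings.TrimSpace(imagesOutput), "\n") {
            if strings.TrimSpace(line) == want {
                return true
            }
        }
        return false
    }

    func main() {
        out := "registry.k8s.io/kube-apiserver:v1.30.1\nregistry.k8s.io/etcd:3.5.12-0\n"
        fmt.Println(hasImage(out, "registry.k8s.io/kube-apiserver:v1.30.1")) // true: preload already present
        fmt.Println(hasImage("", "registry.k8s.io/kube-apiserver:v1.30.1"))  // false: copy and extract the tarball
    }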
	I0603 13:25:44.424893   15052 kubeadm.go:928] updating node { 172.22.153.250 8443 v1.30.1 docker true true} ...
	I0603 13:25:44.424893   15052 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-149700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.153.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:25:44.438080   15052 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 13:25:44.472067   15052 cni.go:84] Creating CNI manager for ""
	I0603 13:25:44.472067   15052 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 13:25:44.472067   15052 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:25:44.472067   15052 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.22.153.250 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-149700 NodeName:ha-149700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.22.153.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.22.153.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:25:44.472469   15052 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.22.153.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-149700"
	  kubeletExtraArgs:
	    node-ip: 172.22.153.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.22.153.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:25:44.472469   15052 kube-vip.go:115] generating kube-vip config ...
	I0603 13:25:44.484194   15052 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 13:25:44.507949   15052 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 13:25:44.513841   15052 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.22.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0603 13:25:44.534251   15052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:25:44.554405   15052 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:25:44.565567   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 13:25:44.580255   15052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0603 13:25:44.614980   15052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:25:44.641482   15052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0603 13:25:44.669171   15052 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0603 13:25:44.707720   15052 ssh_runner.go:195] Run: grep 172.22.159.254	control-plane.minikube.internal$ /etc/hosts
	I0603 13:25:44.712456   15052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:25:44.749641   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:25:44.940318   15052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:25:44.972554   15052 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700 for IP: 172.22.153.250
	I0603 13:25:44.972554   15052 certs.go:194] generating shared ca certs ...
	I0603 13:25:44.972554   15052 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:44.973103   15052 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 13:25:44.973758   15052 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 13:25:44.974007   15052 certs.go:256] generating profile certs ...
	I0603 13:25:44.975000   15052 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.key
	I0603 13:25:44.975110   15052 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.crt with IP's: []
	I0603 13:25:45.211152   15052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.crt ...
	I0603 13:25:45.211152   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.crt: {Name:mkd40092c17fb57650e7b7fbf7406b5922892c8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:45.211833   15052 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.key ...
	I0603 13:25:45.211833   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.key: {Name:mkcf69de3b4a9d0e912390dcbe3d7781732b7884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:45.213267   15052 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.5b5144c8
	I0603 13:25:45.214285   15052 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.5b5144c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.22.153.250 172.22.159.254]
	I0603 13:25:45.345867   15052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.5b5144c8 ...
	I0603 13:25:45.345867   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.5b5144c8: {Name:mk68336b476a2079c07481702cd1c43f36b5b5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:45.347283   15052 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.5b5144c8 ...
	I0603 13:25:45.347283   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.5b5144c8: {Name:mk20fc4aafb5f3cbc5faf210774bf49b7ab01a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:45.348947   15052 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.5b5144c8 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt
	I0603 13:25:45.356765   15052 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.5b5144c8 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key
	I0603 13:25:45.362196   15052 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key
	I0603 13:25:45.363766   15052 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt with IP's: []
	I0603 13:25:45.459849   15052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt ...
	I0603 13:25:45.459849   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt: {Name:mk20f2de9c598d9a48f4f9f2e3b6b9b2a4e96582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:45.466739   15052 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key ...
	I0603 13:25:45.466739   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key: {Name:mk507e8c3d191fe53b20c6ca6fc8eae567a9ed39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:25:45.468126   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 13:25:45.469234   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 13:25:45.469234   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 13:25:45.469234   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 13:25:45.469234   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 13:25:45.469234   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 13:25:45.469234   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 13:25:45.470533   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 13:25:45.478565   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 13:25:45.479283   15052 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 13:25:45.479407   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 13:25:45.479548   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 13:25:45.479846   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 13:25:45.479846   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 13:25:45.479846   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 13:25:45.479846   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 13:25:45.479846   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:25:45.480955   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
	I0603 13:25:45.482780   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:25:45.527537   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:25:45.573232   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:25:45.615867   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 13:25:45.658020   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 13:25:45.700855   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:25:45.740483   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:25:45.786374   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:25:45.826029   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 13:25:45.862903   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:25:45.903634   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 13:25:45.945848   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:25:45.986472   15052 ssh_runner.go:195] Run: openssl version
	I0603 13:25:46.005262   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 13:25:46.037466   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 13:25:46.044317   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 13:25:46.055929   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 13:25:46.075866   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:25:46.109316   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:25:46.140072   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:25:46.149241   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:25:46.159647   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:25:46.182534   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:25:46.214794   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 13:25:46.248014   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 13:25:46.251059   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 13:25:46.267340   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 13:25:46.290866   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
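Each CA file copied above is exposed to OpenSSL-based clients through the subject-hash convention: openssl x509 -hash -noout prints the hash, and a symlink named <hash>.0 in /etc/ssl/certs points at the PEM (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certificates). A sketch of composing those two commands (hypothetical helper; the hash argument must be the output of the first command):

    package main

    import "fmt"

    // caTrustCommands returns the two commands seen above for one PEM: compute
    // its OpenSSL subject hash, then link it into the system trust directory
    // under "<hash>.0".
    func caTrustCommands(pem, hash string) (hashCmd, linkCmd string) {
        hashCmd = fmt.Sprintf("openssl x509 -hash -noout -in %s", pem)
        linkCmd = fmt.Sprintf(
            "sudo /bin/bash -c \"test -L /etc/ssl/certs/%[2]s.0 || ln -fs %[1]s /etc/ssl/certs/%[2]s.0\"",
            pem, hash)
        return
    }

    func main() {
        h, l := caTrustCommands("/usr/share/ca-certificates/minikubeCA.pem", "b5213941")
        fmt.Println(h)
        fmt.Println(l)
    }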
	I0603 13:25:46.327040   15052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:25:46.336704   15052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 13:25:46.337107   15052 kubeadm.go:391] StartCluster: {Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clu
sterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:25:46.347489   15052 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 13:25:46.377823   15052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 13:25:46.407277   15052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:25:46.434984   15052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:25:46.459495   15052 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:25:46.459540   15052 kubeadm.go:156] found existing configuration files:
	
	I0603 13:25:46.471002   15052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:25:46.485792   15052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:25:46.498743   15052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:25:46.528143   15052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:25:46.543229   15052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:25:46.555001   15052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:25:46.589878   15052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:25:46.608518   15052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:25:46.619178   15052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:25:46.648018   15052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:25:46.664095   15052 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:25:46.676493   15052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:25:46.693673   15052 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:25:47.078217   15052 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:26:01.248145   15052 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 13:26:01.248311   15052 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:26:01.248536   15052 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:26:01.248749   15052 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:26:01.248749   15052 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0603 13:26:01.248749   15052 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:26:01.252648   15052 out.go:204]   - Generating certificates and keys ...
	I0603 13:26:01.253004   15052 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:26:01.253168   15052 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:26:01.253308   15052 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 13:26:01.253308   15052 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 13:26:01.253308   15052 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 13:26:01.253308   15052 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 13:26:01.253895   15052 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 13:26:01.253895   15052 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-149700 localhost] and IPs [172.22.153.250 127.0.0.1 ::1]
	I0603 13:26:01.253895   15052 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 13:26:01.254541   15052 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-149700 localhost] and IPs [172.22.153.250 127.0.0.1 ::1]
	I0603 13:26:01.254669   15052 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 13:26:01.254669   15052 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 13:26:01.254669   15052 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 13:26:01.254669   15052 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:26:01.255208   15052 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:26:01.255367   15052 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 13:26:01.255446   15052 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:26:01.255446   15052 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:26:01.255446   15052 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:26:01.255974   15052 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:26:01.256223   15052 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:26:01.261213   15052 out.go:204]   - Booting up control plane ...
	I0603 13:26:01.261483   15052 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:26:01.261648   15052 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:26:01.261818   15052 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:26:01.262098   15052 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:26:01.262446   15052 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:26:01.262552   15052 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:26:01.262963   15052 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 13:26:01.263196   15052 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 13:26:01.263250   15052 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.208001ms
	I0603 13:26:01.263250   15052 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 13:26:01.263250   15052 kubeadm.go:309] [api-check] The API server is healthy after 9.113220466s
	I0603 13:26:01.263828   15052 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 13:26:01.263871   15052 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 13:26:01.263871   15052 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 13:26:01.264512   15052 kubeadm.go:309] [mark-control-plane] Marking the node ha-149700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 13:26:01.264512   15052 kubeadm.go:309] [bootstrap-token] Using token: 5v14cf.t70vxkjeta9v5oor
	I0603 13:26:01.267349   15052 out.go:204]   - Configuring RBAC rules ...
	I0603 13:26:01.267349   15052 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 13:26:01.267349   15052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 13:26:01.267349   15052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 13:26:01.268961   15052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 13:26:01.269058   15052 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 13:26:01.269058   15052 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 13:26:01.269058   15052 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 13:26:01.269058   15052 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 13:26:01.269058   15052 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.269058   15052 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.269058   15052 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.269058   15052 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 13:26:01.269058   15052 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 13:26:01.269058   15052 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.269058   15052 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.269058   15052 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.269058   15052 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 13:26:01.269058   15052 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 13:26:01.269058   15052 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 13:26:01.269058   15052 kubeadm.go:309] 
	I0603 13:26:01.271693   15052 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 13:26:01.271693   15052 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 13:26:01.271693   15052 kubeadm.go:309] 
	I0603 13:26:01.271693   15052 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5v14cf.t70vxkjeta9v5oor \
	I0603 13:26:01.271693   15052 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f \
	I0603 13:26:01.271693   15052 kubeadm.go:309] 	--control-plane 
	I0603 13:26:01.271693   15052 kubeadm.go:309] 
	I0603 13:26:01.271693   15052 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 13:26:01.271693   15052 kubeadm.go:309] 
	I0603 13:26:01.271693   15052 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5v14cf.t70vxkjeta9v5oor \
	I0603 13:26:01.271693   15052 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f 
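The join commands printed by kubeadm embed a bootstrap token (5v14cf.t70vxkjeta9v5oor) that expires after 24 hours by default. If more nodes need to join later, a fresh command and hash can be regenerated on the control plane; this is standard kubeadm/openssl usage rather than something this run performs, and it assumes the certificate directory /var/lib/minikube/certs reported earlier in the log:

    # inside the control-plane guest
    sudo kubeadm token create --print-join-command

    # recompute the discovery-token CA cert hash when only the token is known
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'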
	I0603 13:26:01.271693   15052 cni.go:84] Creating CNI manager for ""
	I0603 13:26:01.271693   15052 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 13:26:01.274785   15052 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 13:26:01.291933   15052 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 13:26:01.300665   15052 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 13:26:01.300665   15052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 13:26:01.349173   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
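With a single node detected, minikube chose kindnet and applied its manifest with the cluster's own kubectl binary. A quick follow-up check from the host could look like the lines below; they assume KUBECONFIG points at the kubeconfig this run writes, and that the manifest's DaemonSet and label are named kindnet/app=kindnet, which the log does not show:

    kubectl --context ha-149700 -n kube-system rollout status daemonset/kindnet --timeout=120s
    kubectl --context ha-149700 -n kube-system get pods -l app=kindnet -o wide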
	I0603 13:26:02.037816   15052 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:26:02.053849   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-149700 minikube.k8s.io/updated_at=2024_06_03T13_26_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=ha-149700 minikube.k8s.io/primary=true
	I0603 13:26:02.054393   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:02.067248   15052 ops.go:34] apiserver oom_adj: -16
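The two kubectl calls above label the primary node with minikube metadata and bind cluster-admin to the kube-system default service account via the minikube-rbac ClusterRoleBinding. Both can be spot-checked afterwards (same kubeconfig assumption as above):

    kubectl --context ha-149700 get node ha-149700 --show-labels
    kubectl --context ha-149700 get clusterrolebinding minikube-rbac -o wide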
	I0603 13:26:02.270112   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:02.771826   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:03.271881   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:03.784185   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:04.276248   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:04.775338   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:05.276629   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:05.776157   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:06.278835   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:06.786505   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:07.283063   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:07.772662   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:08.271795   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:08.781482   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:09.276934   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:09.786075   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:10.270117   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:10.785010   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:11.285597   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:11.783498   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:12.272979   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:12.786406   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:13.275225   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:26:13.440668   15052 kubeadm.go:1107] duration metric: took 11.4029141s to wait for elevateKubeSystemPrivileges
	W0603 13:26:13.440668   15052 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 13:26:13.440668   15052 kubeadm.go:393] duration metric: took 27.1034717s to StartCluster
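The burst of `kubectl get sa default` calls between 13:26:02 and 13:26:13 is minikube polling roughly twice per second until the controller manager has created the default service account; that wait is what the 11.4s elevateKubeSystemPrivileges metric measures. A stand-alone sketch of the same wait, reusing the binary and kubeconfig paths from the log:

    # inside the guest: block until the default service account exists
    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done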
	I0603 13:26:13.440668   15052 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:26:13.440668   15052 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:26:13.444308   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:26:13.445491   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 13:26:13.445491   15052 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:26:13.445491   15052 start.go:240] waiting for startup goroutines ...
	I0603 13:26:13.445491   15052 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:26:13.445491   15052 addons.go:69] Setting default-storageclass=true in profile "ha-149700"
	I0603 13:26:13.446035   15052 addons.go:69] Setting storage-provisioner=true in profile "ha-149700"
	I0603 13:26:13.446035   15052 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-149700"
	I0603 13:26:13.446167   15052 addons.go:234] Setting addon storage-provisioner=true in "ha-149700"
	I0603 13:26:13.446313   15052 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:26:13.446313   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:26:13.447358   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:26:13.447757   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:26:13.605680   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.22.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 13:26:13.994779   15052 start.go:946] {"host.minikube.internal": 172.22.144.1} host record injected into CoreDNS's ConfigMap
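The sed pipeline above splices a hosts block mapping host.minikube.internal to 172.22.144.1 into the CoreDNS Corefile and replaces the ConfigMap in place. The result can be inspected by dumping the Corefile (same kubeconfig assumption as earlier):

    kubectl --context ha-149700 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'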
	I0603 13:26:15.734189   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:26:15.734189   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:15.740627   15052 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:26:15.745447   15052 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:26:15.745529   15052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:26:15.745652   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:26:15.937360   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:26:15.937692   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:15.938667   15052 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:26:15.939292   15052 kapi.go:59] client config for ha-149700: &rest.Config{Host:"https://172.22.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-149700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-149700\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 13:26:15.940822   15052 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 13:26:15.941521   15052 addons.go:234] Setting addon default-storageclass=true in "ha-149700"
	I0603 13:26:15.941584   15052 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:26:15.942945   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:26:17.998627   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:26:18.004614   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:18.004817   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:26:18.179336   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:26:18.179336   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:18.179336   15052 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:26:18.191831   15052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:26:18.191896   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:26:20.449805   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:26:20.455722   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:20.455902   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:26:20.757446   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:26:20.757446   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:20.757915   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:26:20.917319   15052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:26:23.031955   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:26:23.043363   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:23.043503   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:26:23.181225   15052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:26:23.320909   15052 round_trippers.go:463] GET https://172.22.159.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0603 13:26:23.320909   15052 round_trippers.go:469] Request Headers:
	I0603 13:26:23.320909   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:26:23.320909   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:26:23.332465   15052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 13:26:23.333200   15052 round_trippers.go:463] PUT https://172.22.159.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0603 13:26:23.333200   15052 round_trippers.go:469] Request Headers:
	I0603 13:26:23.333200   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:26:23.333200   15052 round_trippers.go:473]     Content-Type: application/json
	I0603 13:26:23.333200   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:26:23.336724   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:26:23.342692   15052 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0603 13:26:23.345167   15052 addons.go:510] duration metric: took 9.8995932s for enable addons: enabled=[storage-provisioner default-storageclass]
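With storage-provisioner and default-storageclass enabled, the cluster should expose the standard StorageClass (visible in the PUT to /storageclasses/standard above) and a storage-provisioner pod in kube-system. A quick sanity check; the pod name follows minikube's default and is not shown in this log:

    kubectl --context ha-149700 get storageclass standard
    kubectl --context ha-149700 -n kube-system get pod storage-provisioner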
	I0603 13:26:23.345283   15052 start.go:245] waiting for cluster config update ...
	I0603 13:26:23.345283   15052 start.go:254] writing updated cluster config ...
	I0603 13:26:23.348004   15052 out.go:177] 
	I0603 13:26:23.359402   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:26:23.359733   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:26:23.359982   15052 out.go:177] * Starting "ha-149700-m02" control-plane node in "ha-149700" cluster
	I0603 13:26:23.365962   15052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 13:26:23.365962   15052 cache.go:56] Caching tarball of preloaded images
	I0603 13:26:23.365962   15052 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 13:26:23.370181   15052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 13:26:23.370374   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:26:23.371071   15052 start.go:360] acquireMachinesLock for ha-149700-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:26:23.371071   15052 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-149700-m02"
	I0603 13:26:23.372853   15052 start.go:93] Provisioning new machine with config: &{Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:26:23.372853   15052 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0603 13:26:23.373772   15052 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 13:26:23.375749   15052 start.go:159] libmachine.API.Create for "ha-149700" (driver="hyperv")
	I0603 13:26:23.375749   15052 client.go:168] LocalClient.Create starting
	I0603 13:26:23.375749   15052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0603 13:26:23.376363   15052 main.go:141] libmachine: Decoding PEM data...
	I0603 13:26:23.376442   15052 main.go:141] libmachine: Parsing certificate...
	I0603 13:26:23.376512   15052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0603 13:26:23.376512   15052 main.go:141] libmachine: Decoding PEM data...
	I0603 13:26:23.376512   15052 main.go:141] libmachine: Parsing certificate...
	I0603 13:26:23.376512   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 13:26:25.182459   15052 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 13:26:25.182459   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:25.190703   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 13:26:26.950329   15052 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 13:26:26.950329   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:26.951563   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 13:26:28.390556   15052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 13:26:28.390556   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:28.391801   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 13:26:31.858648   15052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 13:26:31.858648   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:31.861007   15052 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 13:26:32.339528   15052 main.go:141] libmachine: Creating SSH key...
	I0603 13:26:33.029688   15052 main.go:141] libmachine: Creating VM...
	I0603 13:26:33.030310   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 13:26:35.798351   15052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 13:26:35.798351   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:35.798351   15052 main.go:141] libmachine: Using switch "Default Switch"
	I0603 13:26:35.798351   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 13:26:37.519314   15052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 13:26:37.519314   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:37.519314   15052 main.go:141] libmachine: Creating VHD
	I0603 13:26:37.519314   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 13:26:41.198703   15052 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 42D308E2-C6AA-49D1-88E4-01A60A34AA2A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 13:26:41.198703   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:41.198703   15052 main.go:141] libmachine: Writing magic tar header
	I0603 13:26:41.198703   15052 main.go:141] libmachine: Writing SSH key tar header
	I0603 13:26:41.208407   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 13:26:44.306333   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:26:44.315055   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:44.315055   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\disk.vhd' -SizeBytes 20000MB
	I0603 13:26:46.779376   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:26:46.779376   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:46.779533   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-149700-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 13:26:50.286235   15052 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-149700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 13:26:50.286235   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:50.286235   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-149700-m02 -DynamicMemoryEnabled $false
	I0603 13:26:52.450139   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:26:52.460497   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:52.460497   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-149700-m02 -Count 2
	I0603 13:26:54.527184   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:26:54.527184   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:54.536240   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-149700-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\boot2docker.iso'
	I0603 13:26:57.003684   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:26:57.012903   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:57.012965   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-149700-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\disk.vhd'
	I0603 13:26:59.701873   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:26:59.701873   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:26:59.701873   15052 main.go:141] libmachine: Starting VM...
	I0603 13:26:59.701873   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-149700-m02
	I0603 13:27:02.870372   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:27:02.870372   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:02.870372   15052 main.go:141] libmachine: Waiting for host to start...
	I0603 13:27:02.873843   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:05.125466   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:05.133960   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:05.134039   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:07.595877   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:27:07.595877   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:08.608280   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:10.752042   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:10.752669   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:10.752669   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:13.189611   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:27:13.192476   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:14.193658   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:16.340851   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:16.350737   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:16.350737   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:18.825973   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:27:18.825973   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:19.837035   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:21.968263   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:21.975575   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:21.975575   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:24.492462   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:27:24.492462   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:25.499562   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:27.740331   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:27.740331   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:27.740331   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:30.280091   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:27:30.291964   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:30.291964   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:32.413454   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:32.413454   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:32.423977   15052 machine.go:94] provisionDockerMachine start ...
	I0603 13:27:32.424275   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:34.531811   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:34.531811   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:34.532008   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:37.038005   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:27:37.038283   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:37.044423   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:27:37.044574   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:27:37.045163   15052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:27:37.174273   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:27:37.174273   15052 buildroot.go:166] provisioning hostname "ha-149700-m02"
	I0603 13:27:37.174273   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:39.255739   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:39.255739   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:39.266184   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:41.761420   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:27:41.761420   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:41.779651   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:27:41.780137   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:27:41.780226   15052 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-149700-m02 && echo "ha-149700-m02" | sudo tee /etc/hostname
	I0603 13:27:41.937014   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-149700-m02
	
	I0603 13:27:41.937014   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:44.036454   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:44.036454   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:44.048757   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:46.542534   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:27:46.542534   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:46.557589   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:27:46.557589   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:27:46.557589   15052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-149700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-149700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-149700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:27:46.699968   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:27:46.699968   15052 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 13:27:46.699968   15052 buildroot.go:174] setting up certificates
	I0603 13:27:46.699968   15052 provision.go:84] configureAuth start
	I0603 13:27:46.699968   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:48.767318   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:48.767318   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:48.772848   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:51.210418   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:27:51.210418   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:51.210418   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:53.301033   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:53.310589   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:53.310732   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:27:55.737200   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:27:55.747661   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:55.747661   15052 provision.go:143] copyHostCerts
	I0603 13:27:55.747925   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 13:27:55.748243   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 13:27:55.748243   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 13:27:55.748812   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 13:27:55.750142   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 13:27:55.750556   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 13:27:55.750556   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 13:27:55.750664   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 13:27:55.751906   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 13:27:55.751980   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 13:27:55.751980   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 13:27:55.752623   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 13:27:55.753385   15052 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-149700-m02 san=[127.0.0.1 172.22.154.57 ha-149700-m02 localhost minikube]
	I0603 13:27:55.941777   15052 provision.go:177] copyRemoteCerts
	I0603 13:27:55.952414   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:27:55.952414   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:27:58.010790   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:27:58.020312   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:27:58.020312   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:00.433288   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:00.444122   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:00.444404   15052 sshutil.go:53] new ssh client: &{IP:172.22.154.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\id_rsa Username:docker}
	I0603 13:28:00.550347   15052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5978078s)
	I0603 13:28:00.550424   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 13:28:00.550474   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:28:00.593507   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 13:28:00.593507   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 13:28:00.637744   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 13:28:00.638133   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:28:00.678955   15052 provision.go:87] duration metric: took 13.9788718s to configureAuth
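configureAuth generated a server certificate whose SANs cover the new node (san=[127.0.0.1 172.22.154.57 ha-149700-m02 localhost minikube] at 13:27:55) and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. The deployed cert can be checked over SSH using the key path, user and IP recorded at 13:28:00; the openssl invocation is a generic inspection command, not part of this run:

    ssh -i C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\id_rsa docker@172.22.154.57 \
        "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"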
	I0603 13:28:00.679070   15052 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:28:00.679750   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:28:00.679750   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:02.744660   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:02.754643   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:02.754643   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:05.168486   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:05.179367   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:05.185509   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:28:05.185509   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:28:05.186101   15052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 13:28:05.317399   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 13:28:05.317497   15052 buildroot.go:70] root file system type: tmpfs
	I0603 13:28:05.317677   15052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 13:28:05.317880   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:07.394286   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:07.399922   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:07.399922   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:09.875707   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:09.875707   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:09.882684   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:28:09.883375   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:28:09.883375   15052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.22.153.250"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 13:28:10.037701   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.22.153.250
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 13:28:10.037803   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:12.101487   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:12.101487   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:12.101487   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:14.551492   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:14.562458   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:14.568736   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:28:14.568830   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:28:14.568830   15052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 13:28:16.634766   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 13:28:16.634879   15052 machine.go:97] duration metric: took 44.210536s to provisionDockerMachine
	I0603 13:28:16.634879   15052 client.go:171] duration metric: took 1m53.2581908s to LocalClient.Create
	I0603 13:28:16.634879   15052 start.go:167] duration metric: took 1m53.2581908s to libmachine.API.Create "ha-149700"
	I0603 13:28:16.634879   15052 start.go:293] postStartSetup for "ha-149700-m02" (driver="hyperv")
	I0603 13:28:16.634879   15052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:28:16.646878   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:28:16.646878   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:18.718873   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:18.718873   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:18.729699   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:21.164696   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:21.164696   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:21.174923   15052 sshutil.go:53] new ssh client: &{IP:172.22.154.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\id_rsa Username:docker}
	I0603 13:28:21.284908   15052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6379919s)
	I0603 13:28:21.296216   15052 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:28:21.305185   15052 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:28:21.305185   15052 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 13:28:21.305907   15052 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 13:28:21.307045   15052 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 13:28:21.307112   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 13:28:21.318141   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:28:21.338414   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 13:28:21.382954   15052 start.go:296] duration metric: took 4.7480358s for postStartSetup
	I0603 13:28:21.385675   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:23.482324   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:23.482324   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:23.482472   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:25.950235   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:25.950235   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:25.960385   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:28:25.962948   15052 start.go:128] duration metric: took 2m2.5889728s to createHost
	I0603 13:28:25.963037   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:28.017950   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:28.028400   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:28.028400   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:30.456482   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:30.466513   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:30.471890   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:28:30.472619   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:28:30.472619   15052 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:28:30.606907   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717421310.609726096
	
	I0603 13:28:30.606907   15052 fix.go:216] guest clock: 1717421310.609726096
	I0603 13:28:30.606907   15052 fix.go:229] Guest: 2024-06-03 13:28:30.609726096 +0000 UTC Remote: 2024-06-03 13:28:25.9629487 +0000 UTC m=+329.152027201 (delta=4.646777396s)
	I0603 13:28:30.606907   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:32.667509   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:32.667509   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:32.667509   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:35.079610   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:35.079610   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:35.098534   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:28:35.099040   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.57 22 <nil> <nil>}
	I0603 13:28:35.099097   15052 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717421310
	I0603 13:28:35.241426   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 13:28:30 UTC 2024
	
	I0603 13:28:35.241426   15052 fix.go:236] clock set: Mon Jun  3 13:28:30 UTC 2024
	 (err=<nil>)
	I0603 13:28:35.241426   15052 start.go:83] releasing machines lock for "ha-149700-m02", held for 2m11.867636s
	I0603 13:28:35.242106   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:37.308361   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:37.308627   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:37.308627   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:39.773580   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:39.773646   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:39.781728   15052 out.go:177] * Found network options:
	I0603 13:28:39.784066   15052 out.go:177]   - NO_PROXY=172.22.153.250
	W0603 13:28:39.786860   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 13:28:39.788955   15052 out.go:177]   - NO_PROXY=172.22.153.250
	W0603 13:28:39.791476   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 13:28:39.792934   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 13:28:39.793420   15052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:28:39.793420   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:39.798396   15052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 13:28:39.798396   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m02 ).state
	I0603 13:28:41.934598   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:41.934760   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:41.934760   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:41.972760   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:41.973120   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:41.973120   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:44.478130   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:44.478130   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:44.478130   15052 sshutil.go:53] new ssh client: &{IP:172.22.154.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\id_rsa Username:docker}
	I0603 13:28:44.503248   15052 main.go:141] libmachine: [stdout =====>] : 172.22.154.57
	
	I0603 13:28:44.503248   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:44.504838   15052 sshutil.go:53] new ssh client: &{IP:172.22.154.57 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m02\id_rsa Username:docker}
	I0603 13:28:44.566373   15052 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.767938s)
	W0603 13:28:44.566373   15052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:28:44.580075   15052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:28:44.842809   15052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:28:44.842946   15052 start.go:494] detecting cgroup driver to use...
	I0603 13:28:44.842946   15052 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0494841s)
	I0603 13:28:44.843029   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:28:44.887596   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 13:28:44.918380   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 13:28:44.935196   15052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 13:28:44.947173   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 13:28:44.975105   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 13:28:45.006088   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 13:28:45.034679   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 13:28:45.068502   15052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:28:45.100251   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 13:28:45.129981   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 13:28:45.159328   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 13:28:45.191917   15052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:28:45.220515   15052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:28:45.249195   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:28:45.433581   15052 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 13:28:45.464127   15052 start.go:494] detecting cgroup driver to use...
	I0603 13:28:45.476812   15052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 13:28:45.513000   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:28:45.548426   15052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:28:45.582583   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:28:45.619289   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 13:28:45.654075   15052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 13:28:45.713688   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 13:28:45.735183   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:28:45.784476   15052 ssh_runner.go:195] Run: which cri-dockerd
	I0603 13:28:45.803319   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 13:28:45.822848   15052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 13:28:45.864576   15052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 13:28:46.070335   15052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 13:28:46.246159   15052 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 13:28:46.246159   15052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 13:28:46.290892   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:28:46.475516   15052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 13:28:48.962273   15052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4867365s)
	I0603 13:28:48.976004   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 13:28:49.018930   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 13:28:49.055781   15052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 13:28:49.242449   15052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 13:28:49.425546   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:28:49.612903   15052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 13:28:49.653295   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 13:28:49.686640   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:28:49.870462   15052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 13:28:49.970135   15052 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 13:28:49.982958   15052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 13:28:49.992982   15052 start.go:562] Will wait 60s for crictl version
	I0603 13:28:50.004725   15052 ssh_runner.go:195] Run: which crictl
	I0603 13:28:50.022270   15052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:28:50.082427   15052 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 13:28:50.092200   15052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 13:28:50.130445   15052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 13:28:50.162776   15052 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 13:28:50.165368   15052 out.go:177]   - env NO_PROXY=172.22.153.250
	I0603 13:28:50.168180   15052 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 13:28:50.172608   15052 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 13:28:50.172608   15052 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 13:28:50.172608   15052 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 13:28:50.172608   15052 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 13:28:50.174487   15052 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 13:28:50.174487   15052 ip.go:210] interface addr: 172.22.144.1/20
	I0603 13:28:50.187406   15052 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 13:28:50.194171   15052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:28:50.214218   15052 mustload.go:65] Loading cluster: ha-149700
	I0603 13:28:50.214833   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:28:50.215362   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:28:52.256472   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:52.256472   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:52.265716   15052 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:28:52.265985   15052 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700 for IP: 172.22.154.57
	I0603 13:28:52.265985   15052 certs.go:194] generating shared ca certs ...
	I0603 13:28:52.265985   15052 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:28:52.267374   15052 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 13:28:52.267744   15052 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 13:28:52.267906   15052 certs.go:256] generating profile certs ...
	I0603 13:28:52.268627   15052 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.key
	I0603 13:28:52.268703   15052 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.d47302e0
	I0603 13:28:52.268854   15052 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.d47302e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.22.153.250 172.22.154.57 172.22.159.254]
	I0603 13:28:52.402707   15052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.d47302e0 ...
	I0603 13:28:52.402707   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.d47302e0: {Name:mkf4a9eb687790cb623fb705825c463597bc32ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:28:52.410570   15052 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.d47302e0 ...
	I0603 13:28:52.410570   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.d47302e0: {Name:mk6a70665679a6c2cb0a4ffbe757b331292f3a1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:28:52.412974   15052 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.d47302e0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt
	I0603 13:28:52.424815   15052 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.d47302e0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key
	I0603 13:28:52.426336   15052 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key
	I0603 13:28:52.426336   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 13:28:52.426336   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 13:28:52.426336   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 13:28:52.426894   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 13:28:52.427131   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 13:28:52.427131   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 13:28:52.427726   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 13:28:52.427726   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 13:28:52.428657   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 13:28:52.428948   15052 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 13:28:52.428948   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 13:28:52.429580   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 13:28:52.430005   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 13:28:52.430339   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 13:28:52.430451   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 13:28:52.430451   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
	I0603 13:28:52.431085   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 13:28:52.431220   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:28:52.431412   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:28:54.470978   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:28:54.470978   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:54.482321   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:28:56.919132   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:28:56.930251   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:28:56.930443   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:28:57.036492   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 13:28:57.047378   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 13:28:57.079124   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 13:28:57.087293   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0603 13:28:57.117378   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 13:28:57.120313   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 13:28:57.157391   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 13:28:57.163239   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0603 13:28:57.193061   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 13:28:57.196254   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 13:28:57.229212   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 13:28:57.236843   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0603 13:28:57.253851   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:28:57.302029   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:28:57.349019   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:28:57.393306   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 13:28:57.440571   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0603 13:28:57.493971   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:28:57.553685   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:28:57.600724   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:28:57.646616   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 13:28:57.690112   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 13:28:57.730626   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:28:57.777177   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 13:28:57.819798   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0603 13:28:57.857258   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 13:28:57.889603   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0603 13:28:57.919090   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 13:28:57.952236   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0603 13:28:57.983344   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 13:28:58.023491   15052 ssh_runner.go:195] Run: openssl version
	I0603 13:28:58.044069   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:28:58.077341   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:28:58.084717   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:28:58.094757   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:28:58.119336   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:28:58.151941   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 13:28:58.184272   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 13:28:58.192018   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 13:28:58.204151   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 13:28:58.225785   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
	I0603 13:28:58.258025   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 13:28:58.290530   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 13:28:58.297137   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 13:28:58.315010   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 13:28:58.334136   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:28:58.367776   15052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:28:58.374986   15052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 13:28:58.375233   15052 kubeadm.go:928] updating node {m02 172.22.154.57 8443 v1.30.1 docker true true} ...
	I0603 13:28:58.375233   15052 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-149700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.154.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:28:58.375233   15052 kube-vip.go:115] generating kube-vip config ...
	I0603 13:28:58.387078   15052 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 13:28:58.412222   15052 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 13:28:58.412503   15052 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.22.159.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0603 13:28:58.424857   15052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:28:58.441675   15052 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 13:28:58.453351   15052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 13:28:58.472434   15052 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0603 13:28:58.472753   15052 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0603 13:28:58.472753   15052 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
	I0603 13:28:59.511199   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 13:28:59.520154   15052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 13:28:59.533432   15052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 13:28:59.533432   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 13:29:01.277130   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 13:29:01.287605   15052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 13:29:01.300831   15052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 13:29:01.300951   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 13:29:02.958744   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:29:02.983587   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 13:29:02.994599   15052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 13:29:03.003819   15052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 13:29:03.003988   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0603 13:29:03.677042   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 13:29:03.693378   15052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 13:29:03.726053   15052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:29:03.754414   15052 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 13:29:03.797226   15052 ssh_runner.go:195] Run: grep 172.22.159.254	control-plane.minikube.internal$ /etc/hosts
	I0603 13:29:03.804099   15052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:29:03.838970   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:29:04.017559   15052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:29:04.044776   15052 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:29:04.045807   15052 start.go:316] joinCluster: &{Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.154.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:29:04.046087   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 13:29:04.046145   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:29:06.069546   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:29:06.069546   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:29:06.080095   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:29:08.584022   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:29:08.584022   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:29:08.584228   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:29:08.786876   15052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7407502s)
	I0603 13:29:08.786950   15052 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.22.154.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:29:08.786950   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lli69i.sq06vzkgggvy6rlu --discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-149700-m02 --control-plane --apiserver-advertise-address=172.22.154.57 --apiserver-bind-port=8443"
	I0603 13:29:50.868555   15052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lli69i.sq06vzkgggvy6rlu --discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-149700-m02 --control-plane --apiserver-advertise-address=172.22.154.57 --apiserver-bind-port=8443": (42.0812591s)
	I0603 13:29:50.868693   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 13:29:51.652571   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-149700-m02 minikube.k8s.io/updated_at=2024_06_03T13_29_51_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=ha-149700 minikube.k8s.io/primary=false
	I0603 13:29:51.825727   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-149700-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 13:29:52.015180   15052 start.go:318] duration metric: took 47.9689797s to joinCluster
	I0603 13:29:52.015180   15052 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.22.154.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:29:52.017492   15052 out.go:177] * Verifying Kubernetes components...
	I0603 13:29:52.015180   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:29:52.034513   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:29:52.413981   15052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:29:52.445731   15052 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:29:52.446513   15052 kapi.go:59] client config for ha-149700: &rest.Config{Host:"https://172.22.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-149700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-149700\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 13:29:52.446666   15052 kubeadm.go:477] Overriding stale ClientConfig host https://172.22.159.254:8443 with https://172.22.153.250:8443
	I0603 13:29:52.447561   15052 node_ready.go:35] waiting up to 6m0s for node "ha-149700-m02" to be "Ready" ...
	I0603 13:29:52.447561   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:52.447561   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:52.447561   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:52.447561   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:52.462269   15052 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0603 13:29:52.954176   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:52.954240   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:52.954240   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:52.954240   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:52.967625   15052 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 13:29:53.448452   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:53.448452   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:53.448696   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:53.448696   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:53.452433   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:29:53.957167   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:53.957167   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:53.957167   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:53.957167   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:53.964407   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:29:54.463333   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:54.463333   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:54.463447   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:54.463447   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:54.468385   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:29:54.468920   15052 node_ready.go:53] node "ha-149700-m02" has status "Ready":"False"
	I0603 13:29:54.953094   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:54.953205   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:54.953205   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:54.953205   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:54.960845   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:29:55.458917   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:55.459019   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:55.459019   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:55.459019   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:55.464195   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:29:55.948253   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:55.948253   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:55.948479   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:55.948479   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:55.955021   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:29:56.457554   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:56.457554   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:56.457554   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:56.457554   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:56.463293   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:29:56.963423   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:56.963502   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:56.963502   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:56.963544   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:56.979154   15052 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0603 13:29:56.986390   15052 node_ready.go:53] node "ha-149700-m02" has status "Ready":"False"
	I0603 13:29:57.452091   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:57.452091   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:57.452091   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:57.452091   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:57.456703   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:29:57.958820   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:57.958820   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:57.958911   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:57.958911   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:57.967830   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:29:58.463726   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:58.463978   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:58.463978   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:58.463978   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:58.470640   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:29:58.950887   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:58.950887   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:58.950976   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:58.950976   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:58.955658   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:29:59.451510   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:59.451714   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:59.451714   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:59.451714   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:59.455975   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:29:59.457549   15052 node_ready.go:53] node "ha-149700-m02" has status "Ready":"False"
	I0603 13:29:59.957338   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:29:59.957532   15052 round_trippers.go:469] Request Headers:
	I0603 13:29:59.957532   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:29:59.957532   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:29:59.962709   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:00.460412   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:00.460500   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.460500   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.460500   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.465493   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:00.467447   15052 node_ready.go:49] node "ha-149700-m02" has status "Ready":"True"
	I0603 13:30:00.467514   15052 node_ready.go:38] duration metric: took 8.0198873s for node "ha-149700-m02" to be "Ready" ...
	I0603 13:30:00.467514   15052 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:30:00.467514   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:30:00.467514   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.467739   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.467759   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.476103   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:00.487900   15052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6qmlg" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.487900   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6qmlg
	I0603 13:30:00.488436   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.488436   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.488473   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.492237   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:00.493882   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:00.493985   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.493985   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.493985   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.497788   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:00.499016   15052 pod_ready.go:92] pod "coredns-7db6d8ff4d-6qmlg" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:00.499016   15052 pod_ready.go:81] duration metric: took 11.1154ms for pod "coredns-7db6d8ff4d-6qmlg" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.499212   15052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ptqqz" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.499212   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ptqqz
	I0603 13:30:00.499318   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.499318   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.499318   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.506306   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:30:00.506524   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:00.507109   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.507109   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.507109   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.514324   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:30:00.515051   15052 pod_ready.go:92] pod "coredns-7db6d8ff4d-ptqqz" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:00.515051   15052 pod_ready.go:81] duration metric: took 15.8387ms for pod "coredns-7db6d8ff4d-ptqqz" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.515051   15052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.515126   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700
	I0603 13:30:00.515204   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.515204   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.515204   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.522393   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:30:00.523755   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:00.523909   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.523909   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.523981   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.528442   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:00.529116   15052 pod_ready.go:92] pod "etcd-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:00.529162   15052 pod_ready.go:81] duration metric: took 14.1118ms for pod "etcd-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.529162   15052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:00.529299   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:00.529299   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.529299   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.529299   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.533574   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:00.535141   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:00.535187   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:00.535187   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:00.535187   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:00.538399   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:01.035001   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:01.035001   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:01.035001   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:01.035001   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:01.040608   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:01.041747   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:01.041747   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:01.041747   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:01.041747   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:01.047237   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:01.532603   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:01.532697   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:01.532697   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:01.532697   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:01.536174   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:01.537589   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:01.537589   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:01.537589   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:01.537589   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:01.542236   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:02.032233   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:02.032305   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:02.032305   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:02.032305   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:02.040449   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:02.041188   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:02.041188   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:02.041188   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:02.041188   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:02.046008   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:02.533417   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:02.533417   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:02.533417   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:02.533417   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:02.538351   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:02.539720   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:02.539720   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:02.539720   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:02.539720   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:02.543773   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:02.544919   15052 pod_ready.go:102] pod "etcd-ha-149700-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 13:30:03.031327   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:03.031577   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:03.031577   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:03.031577   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:03.036885   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:03.038036   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:03.038036   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:03.038036   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:03.038172   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:03.043312   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:03.544573   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:03.544675   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:03.544675   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:03.544675   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:03.549618   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:03.549923   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:03.549923   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:03.549923   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:03.549923   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:03.554676   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:04.042693   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:04.042965   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:04.042965   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:04.042965   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:04.051286   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:04.052197   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:04.052197   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:04.052197   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:04.052197   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:04.060935   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:04.544230   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:04.544435   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:04.544435   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:04.544435   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:04.555032   15052 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 13:30:04.556027   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:04.556027   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:04.556027   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:04.556027   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:04.559288   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:04.561008   15052 pod_ready.go:102] pod "etcd-ha-149700-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 13:30:05.044010   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:30:05.044117   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.044117   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.044117   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.049389   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:05.049885   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:05.049885   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.049885   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.049885   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.055061   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:05.056548   15052 pod_ready.go:92] pod "etcd-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:05.056630   15052 pod_ready.go:81] duration metric: took 4.5273488s for pod "etcd-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.056630   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.056744   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700
	I0603 13:30:05.056772   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.056772   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.056772   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.060419   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:05.061193   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:05.061193   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.061193   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.061193   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.065633   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:05.066960   15052 pod_ready.go:92] pod "kube-apiserver-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:05.067051   15052 pod_ready.go:81] duration metric: took 10.4214ms for pod "kube-apiserver-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.067051   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.067138   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700-m02
	I0603 13:30:05.067138   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.067138   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.067138   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.073282   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:30:05.074254   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:05.074254   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.074840   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.074840   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.078835   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:05.078835   15052 pod_ready.go:92] pod "kube-apiserver-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:05.078835   15052 pod_ready.go:81] duration metric: took 11.7837ms for pod "kube-apiserver-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.078835   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.078835   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700
	I0603 13:30:05.079831   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.079831   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.079831   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.087786   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:30:05.260639   15052 request.go:629] Waited for 171.5741ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:05.260930   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:05.260930   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.260930   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.261011   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.269512   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:05.270436   15052 pod_ready.go:92] pod "kube-controller-manager-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:05.270436   15052 pod_ready.go:81] duration metric: took 191.5993ms for pod "kube-controller-manager-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.270436   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.462406   15052 request.go:629] Waited for 191.2509ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700-m02
	I0603 13:30:05.462605   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700-m02
	I0603 13:30:05.462605   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.462684   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.462684   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.471322   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:05.667451   15052 request.go:629] Waited for 194.8664ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:05.667607   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:05.667607   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.667607   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.667666   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.672058   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:30:05.673463   15052 pod_ready.go:92] pod "kube-controller-manager-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:05.673519   15052 pod_ready.go:81] duration metric: took 403.0797ms for pod "kube-controller-manager-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.673519   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9wjpn" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:05.871763   15052 request.go:629] Waited for 197.9871ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wjpn
	I0603 13:30:05.872006   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wjpn
	I0603 13:30:05.872075   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:05.872075   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:05.872075   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:05.879276   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:30:06.061846   15052 request.go:629] Waited for 181.3424ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:06.061846   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:06.061846   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:06.061846   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:06.062123   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:06.067604   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:06.068423   15052 pod_ready.go:92] pod "kube-proxy-9wjpn" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:06.068521   15052 pod_ready.go:81] duration metric: took 394.9987ms for pod "kube-proxy-9wjpn" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:06.068521   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbzvt" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:06.267350   15052 request.go:629] Waited for 198.4714ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbzvt
	I0603 13:30:06.267520   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbzvt
	I0603 13:30:06.267634   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:06.267634   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:06.267634   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:06.272942   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:06.469661   15052 request.go:629] Waited for 195.5802ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:06.469931   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:06.469931   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:06.469931   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:06.469931   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:06.474972   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:06.476322   15052 pod_ready.go:92] pod "kube-proxy-vbzvt" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:06.476322   15052 pod_ready.go:81] duration metric: took 407.7974ms for pod "kube-proxy-vbzvt" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:06.476322   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:06.672423   15052 request.go:629] Waited for 195.9004ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700
	I0603 13:30:06.672591   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700
	I0603 13:30:06.672591   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:06.672591   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:06.672591   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:06.676204   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:30:06.862303   15052 request.go:629] Waited for 184.2693ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:06.862410   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:30:06.862500   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:06.862500   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:06.862500   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:06.867907   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:06.868299   15052 pod_ready.go:92] pod "kube-scheduler-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:06.868299   15052 pod_ready.go:81] duration metric: took 391.9743ms for pod "kube-scheduler-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:06.868299   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:07.068758   15052 request.go:629] Waited for 200.2059ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700-m02
	I0603 13:30:07.068889   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700-m02
	I0603 13:30:07.068889   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:07.068889   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:07.069085   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:07.076486   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:30:07.271302   15052 request.go:629] Waited for 193.382ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:07.271302   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:30:07.271302   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:07.271302   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:07.271302   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:07.277006   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:30:07.279783   15052 pod_ready.go:92] pod "kube-scheduler-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:30:07.279783   15052 pod_ready.go:81] duration metric: took 411.4807ms for pod "kube-scheduler-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:30:07.279783   15052 pod_ready.go:38] duration metric: took 6.8122135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:30:07.279783   15052 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:30:07.291986   15052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:30:07.320968   15052 api_server.go:72] duration metric: took 15.3056628s to wait for apiserver process to appear ...
	I0603 13:30:07.321002   15052 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:30:07.321102   15052 api_server.go:253] Checking apiserver healthz at https://172.22.153.250:8443/healthz ...
	I0603 13:30:07.331095   15052 api_server.go:279] https://172.22.153.250:8443/healthz returned 200:
	ok
	I0603 13:30:07.331132   15052 round_trippers.go:463] GET https://172.22.153.250:8443/version
	I0603 13:30:07.331132   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:07.331132   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:07.331132   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:07.333131   15052 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 13:30:07.333378   15052 api_server.go:141] control plane version: v1.30.1
	I0603 13:30:07.333378   15052 api_server.go:131] duration metric: took 12.309ms to wait for apiserver health ...
	I0603 13:30:07.333378   15052 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:30:07.474670   15052 request.go:629] Waited for 141.0662ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:30:07.474670   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:30:07.474670   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:07.474670   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:07.474670   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:07.484355   15052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 13:30:07.493027   15052 system_pods.go:59] 17 kube-system pods found
	I0603 13:30:07.493027   15052 system_pods.go:61] "coredns-7db6d8ff4d-6qmlg" [e5596259-8a05-48a0-93ca-c46f8d67a213] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "coredns-7db6d8ff4d-ptqqz" [5f7a6070-d736-4701-a5e0-98dd4e01948a] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "etcd-ha-149700" [e75a16ce-11b4-4e7a-8d3d-abfbdb69c3dd] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "etcd-ha-149700-m02" [25624fa9-12e8-4bcf-be97-56ceba40e44d] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kindnet-l2cph" [c145f100-1464-40fa-a165-1a92800515b0] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kindnet-qphhc" [d0b48843-531c-43f1-996a-9ac482b9e838] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-apiserver-ha-149700" [9421ffa6-ceee-4b30-ab28-5b00c6181dd2] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-apiserver-ha-149700-m02" [027bc9b6-d88a-4ee9-bd31-22e3f8ca7463] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-controller-manager-ha-149700" [b812ec80-4942-448f-8017-2440b3f07ce8] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-controller-manager-ha-149700-m02" [c8ad5667-4fec-4425-b553-42ff3f8a3439] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-proxy-9wjpn" [5f53e110-b18c-4255-963d-efecaa1f7f2d] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-proxy-vbzvt" [b025c683-b092-43ca-8dce-b4d687f5eb2d] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-scheduler-ha-149700" [db7d2a13-c940-49f5-bf6f-d5077e3f223c] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-scheduler-ha-149700-m02" [8174835b-f95e-41a3-b5ef-f96197fd45dc] Running
	I0603 13:30:07.493027   15052 system_pods.go:61] "kube-vip-ha-149700" [f84f708c-1c96-438f-893e-1a3ed1c16e3a] Running
	I0603 13:30:07.494128   15052 system_pods.go:61] "kube-vip-ha-149700-m02" [d238fd54-8865-4689-9b0c-cfce80b8b3b4] Running
	I0603 13:30:07.494128   15052 system_pods.go:61] "storage-provisioner" [f3d34c4f-12d1-4980-8512-3c80dc9d6047] Running
	I0603 13:30:07.494128   15052 system_pods.go:74] duration metric: took 160.7492ms to wait for pod list to return data ...
	I0603 13:30:07.494128   15052 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:30:07.675477   15052 request.go:629] Waited for 181.103ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/default/serviceaccounts
	I0603 13:30:07.675477   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/default/serviceaccounts
	I0603 13:30:07.675477   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:07.675477   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:07.675477   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:07.681932   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:30:07.683288   15052 default_sa.go:45] found service account: "default"
	I0603 13:30:07.683394   15052 default_sa.go:55] duration metric: took 189.2638ms for default service account to be created ...
	I0603 13:30:07.683394   15052 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:30:07.862294   15052 request.go:629] Waited for 178.6395ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:30:07.862409   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:30:07.862409   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:07.862409   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:07.862409   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:07.870950   15052 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 13:30:07.877953   15052 system_pods.go:86] 17 kube-system pods found
	I0603 13:30:07.878095   15052 system_pods.go:89] "coredns-7db6d8ff4d-6qmlg" [e5596259-8a05-48a0-93ca-c46f8d67a213] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "coredns-7db6d8ff4d-ptqqz" [5f7a6070-d736-4701-a5e0-98dd4e01948a] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "etcd-ha-149700" [e75a16ce-11b4-4e7a-8d3d-abfbdb69c3dd] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "etcd-ha-149700-m02" [25624fa9-12e8-4bcf-be97-56ceba40e44d] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kindnet-l2cph" [c145f100-1464-40fa-a165-1a92800515b0] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kindnet-qphhc" [d0b48843-531c-43f1-996a-9ac482b9e838] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-apiserver-ha-149700" [9421ffa6-ceee-4b30-ab28-5b00c6181dd2] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-apiserver-ha-149700-m02" [027bc9b6-d88a-4ee9-bd31-22e3f8ca7463] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-controller-manager-ha-149700" [b812ec80-4942-448f-8017-2440b3f07ce8] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-controller-manager-ha-149700-m02" [c8ad5667-4fec-4425-b553-42ff3f8a3439] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-proxy-9wjpn" [5f53e110-b18c-4255-963d-efecaa1f7f2d] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-proxy-vbzvt" [b025c683-b092-43ca-8dce-b4d687f5eb2d] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-scheduler-ha-149700" [db7d2a13-c940-49f5-bf6f-d5077e3f223c] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-scheduler-ha-149700-m02" [8174835b-f95e-41a3-b5ef-f96197fd45dc] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-vip-ha-149700" [f84f708c-1c96-438f-893e-1a3ed1c16e3a] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "kube-vip-ha-149700-m02" [d238fd54-8865-4689-9b0c-cfce80b8b3b4] Running
	I0603 13:30:07.878095   15052 system_pods.go:89] "storage-provisioner" [f3d34c4f-12d1-4980-8512-3c80dc9d6047] Running
	I0603 13:30:07.878095   15052 system_pods.go:126] duration metric: took 194.7ms to wait for k8s-apps to be running ...
	I0603 13:30:07.878095   15052 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:30:07.888204   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:30:07.913061   15052 system_svc.go:56] duration metric: took 34.9657ms WaitForService to wait for kubelet
	I0603 13:30:07.913476   15052 kubeadm.go:576] duration metric: took 15.8981662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:30:07.913545   15052 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:30:08.066340   15052 request.go:629] Waited for 152.5797ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes
	I0603 13:30:08.066340   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes
	I0603 13:30:08.066441   15052 round_trippers.go:469] Request Headers:
	I0603 13:30:08.066441   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:30:08.066441   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:30:08.072780   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:30:08.074014   15052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:30:08.074014   15052 node_conditions.go:123] node cpu capacity is 2
	I0603 13:30:08.074093   15052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:30:08.074093   15052 node_conditions.go:123] node cpu capacity is 2
	I0603 13:30:08.074093   15052 node_conditions.go:105] duration metric: took 160.5468ms to run NodePressure ...
	I0603 13:30:08.074093   15052 start.go:240] waiting for startup goroutines ...
	I0603 13:30:08.074152   15052 start.go:254] writing updated cluster config ...
	I0603 13:30:08.078758   15052 out.go:177] 
	I0603 13:30:08.094685   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:30:08.094685   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:30:08.103583   15052 out.go:177] * Starting "ha-149700-m03" control-plane node in "ha-149700" cluster
	I0603 13:30:08.107025   15052 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 13:30:08.107025   15052 cache.go:56] Caching tarball of preloaded images
	I0603 13:30:08.107925   15052 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 13:30:08.107925   15052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 13:30:08.107925   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:30:08.115050   15052 start.go:360] acquireMachinesLock for ha-149700-m03: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:30:08.115050   15052 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-149700-m03"
	I0603 13:30:08.115050   15052 start.go:93] Provisioning new machine with config: &{Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.154.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:30:08.115050   15052 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0603 13:30:08.118434   15052 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 13:30:08.119164   15052 start.go:159] libmachine.API.Create for "ha-149700" (driver="hyperv")
	I0603 13:30:08.119164   15052 client.go:168] LocalClient.Create starting
	I0603 13:30:08.119276   15052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0603 13:30:08.119853   15052 main.go:141] libmachine: Decoding PEM data...
	I0603 13:30:08.119853   15052 main.go:141] libmachine: Parsing certificate...
	I0603 13:30:08.120063   15052 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0603 13:30:08.120363   15052 main.go:141] libmachine: Decoding PEM data...
	I0603 13:30:08.120363   15052 main.go:141] libmachine: Parsing certificate...
	I0603 13:30:08.120363   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 13:30:10.015264   15052 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 13:30:10.015264   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:10.015562   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 13:30:11.741480   15052 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 13:30:11.741480   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:11.741974   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 13:30:13.220804   15052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 13:30:13.220804   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:13.221126   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 13:30:17.005641   15052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 13:30:17.005641   15052 main.go:141] libmachine: [stderr =====>] : 
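
The switch discovery above is one PowerShell invocation whose JSON output is parsed to pick a usable virtual switch ("Default Switch" here). Below is a minimal Go sketch of that call-and-parse pattern, assuming powershell.exe and the Hyper-V module are available on the host; the vmSwitch struct simply mirrors the `Select Id, Name, SwitchType` projection in the logged command and is not minikube's actual type.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// vmSwitch mirrors the three fields projected by the logged Get-VMSwitch query.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	// Wrapping in @() keeps ConvertTo-Json emitting a JSON array even when a
	// single switch (e.g. only "Default Switch") is returned, as in the log.
	script := "[Console]::OutputEncoding = [Text.Encoding]::UTF8; " +
		"ConvertTo-Json @(Hyper-V\\Get-VMSwitch | Select Id, Name, SwitchType)"
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		panic(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("%s  %q  SwitchType=%d\n", s.Id, s.Name, s.SwitchType)
	}
}
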
	I0603 13:30:17.007675   15052 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 13:30:17.454215   15052 main.go:141] libmachine: Creating SSH key...
	I0603 13:30:17.825622   15052 main.go:141] libmachine: Creating VM...
	I0603 13:30:17.826094   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 13:30:20.775235   15052 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 13:30:20.775235   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:20.775727   15052 main.go:141] libmachine: Using switch "Default Switch"
	I0603 13:30:20.775727   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 13:30:22.589318   15052 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 13:30:22.589562   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:22.589562   15052 main.go:141] libmachine: Creating VHD
	I0603 13:30:22.589562   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 13:30:26.382157   15052 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 19722408-E759-4665-8C15-7BCF2EB0A2DC
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 13:30:26.382157   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:26.382157   15052 main.go:141] libmachine: Writing magic tar header
	I0603 13:30:26.382411   15052 main.go:141] libmachine: Writing SSH key tar header
	I0603 13:30:26.392212   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 13:30:29.644578   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:29.644578   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:29.645582   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\disk.vhd' -SizeBytes 20000MB
	I0603 13:30:32.228014   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:32.228014   15052 main.go:141] libmachine: [stderr =====>] : 
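
The VHD sequence above (a small fixed.vhd, "Writing magic tar header" / "Writing SSH key tar header", then Convert-VHD to a dynamic disk.vhd and Resize-VHD to the requested 20000MB) appears to embed the SSH public key into the raw disk so the guest can pick it up on first boot. The sketch below only illustrates writing a tar stream at offset 0 of the fixed VHD; the entry name, the exact magic marker, and the paths are assumptions, not minikube's real layout.

package main

import (
	"archive/tar"
	"os"
)

// writeKeyIntoVHD writes a tiny tar archive carrying the SSH public key at the
// start of the raw fixed VHD. Entry names and modes are illustrative only.
func writeKeyIntoVHD(vhdPath, pubKeyPath string) error {
	key, err := os.ReadFile(pubKeyPath)
	if err != nil {
		return err
	}
	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()

	tw := tar.NewWriter(f) // the archive starts at offset 0 of the raw disk
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o644, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(key); err != nil {
		return err
	}
	return tw.Close()
}

func main() {
	// Hypothetical local paths standing in for the machine directory in the log.
	_ = writeKeyIntoVHD("fixed.vhd", "id_rsa.pub")
}
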
	I0603 13:30:32.228486   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-149700-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 13:30:36.056864   15052 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-149700-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 13:30:36.057564   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:36.057643   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-149700-m03 -DynamicMemoryEnabled $false
	I0603 13:30:38.432218   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:38.432218   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:38.432218   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-149700-m03 -Count 2
	I0603 13:30:40.667864   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:40.667864   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:40.668696   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-149700-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\boot2docker.iso'
	I0603 13:30:43.347702   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:43.348461   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:43.348602   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-149700-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\disk.vhd'
	I0603 13:30:46.040459   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:46.040459   15052 main.go:141] libmachine: [stderr =====>] : 
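
Each VM-shaping step above (New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Add-VMHardDiskDrive) is a separate PowerShell call, and any failure aborts createHost before Start-VM runs. A small Go sketch of driving such a sequence is below; the VM name matches the log, but the relative ISO and VHD paths are placeholders.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "ha-149700-m03"
	// One PowerShell invocation per configuration step, run in order.
	steps := []string{
		fmt.Sprintf("Hyper-V\\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false", name),
		fmt.Sprintf("Hyper-V\\Set-VMProcessor %s -Count 2", name),
		fmt.Sprintf("Hyper-V\\Set-VMDvdDrive -VMName %s -Path 'boot2docker.iso'", name),
		fmt.Sprintf("Hyper-V\\Add-VMHardDiskDrive -VMName %s -Path 'disk.vhd'", name),
	}
	for _, s := range steps {
		cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", s)
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("step %q failed: %v\n%s", s, err, out))
		}
	}
}
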
	I0603 13:30:46.040459   15052 main.go:141] libmachine: Starting VM...
	I0603 13:30:46.040459   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-149700-m03
	I0603 13:30:49.180909   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:49.180909   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:49.180909   15052 main.go:141] libmachine: Waiting for host to start...
	I0603 13:30:49.181040   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:30:51.490364   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:30:51.490364   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:51.490364   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:30:54.147172   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:30:54.147172   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:55.158279   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:30:57.446823   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:30:57.446823   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:30:57.447001   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:00.068774   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:31:00.069775   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:01.070935   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:03.337695   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:03.337747   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:03.337747   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:05.973988   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:31:05.973988   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:06.981788   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:09.292477   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:09.292477   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:09.293673   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:11.894224   15052 main.go:141] libmachine: [stdout =====>] : 
	I0603 13:31:11.894224   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:12.907173   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:15.184116   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:15.184116   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:15.184399   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:17.858045   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:17.858397   15052 main.go:141] libmachine: [stderr =====>] : 
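
The "Waiting for host to start..." stretch above alternates two queries: the VM state (which reports Running almost immediately) and the first adapter address (which stays empty until the guest's network stack is up, then returns 172.22.150.43). A hedged sketch of that polling loop is below; runPS is a hypothetical helper, and the attempt count and sleep interval are assumptions rather than minikube's actual values.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runPS runs one PowerShell query and returns its trimmed stdout, mirroring the
// one-invocation-per-query pattern visible in the log.
func runPS(script string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls the VM state and its first adapter address until the guest
// has an IP or the retry budget is exhausted.
func waitForIP(name string) (string, error) {
	for attempt := 0; attempt < 60; attempt++ {
		state, err := runPS(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
		if err != nil {
			return "", err
		}
		if state == "Running" {
			ip, _ := runPS(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name))
			if ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP address", name)
}

func main() {
	ip, err := waitForIP("ha-149700-m03")
	fmt.Println(ip, err)
}
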
	I0603 13:31:17.858397   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:20.074722   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:20.074722   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:20.074722   15052 machine.go:94] provisionDockerMachine start ...
	I0603 13:31:20.074912   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:22.348883   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:22.348883   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:22.349091   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:24.964972   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:24.964972   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:24.970822   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:31:24.982611   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:31:24.982611   15052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:31:25.117662   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:31:25.117773   15052 buildroot.go:166] provisioning hostname "ha-149700-m03"
	I0603 13:31:25.117893   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:27.347138   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:27.347687   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:27.347776   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:30.005863   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:30.005863   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:30.014397   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:31:30.014397   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:31:30.014397   15052 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-149700-m03 && echo "ha-149700-m03" | sudo tee /etc/hostname
	I0603 13:31:30.178389   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-149700-m03
	
	I0603 13:31:30.179496   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:32.383359   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:32.383359   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:32.383359   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:35.015458   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:35.016320   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:35.021944   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:31:35.022645   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:31:35.022645   15052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-149700-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-149700-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-149700-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:31:35.178693   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
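
The shell snippet just executed guarantees exactly one /etc/hosts mapping for the new hostname: if some line already ends with the hostname nothing changes, otherwise the 127.0.1.1 entry is rewritten in place or appended. The Go sketch below restates that rule against a local file; it is an illustration of the logic, not the code that runs inside the guest over SSH.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the grep/sed/tee chain above: keep the file as-is if
// the hostname is already mapped, rewrite 127.0.1.1 if present, else append.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // an entry already ends with the hostname
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(out), 0o644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "ha-149700-m03"))
}
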
	I0603 13:31:35.179228   15052 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 13:31:35.179266   15052 buildroot.go:174] setting up certificates
	I0603 13:31:35.179266   15052 provision.go:84] configureAuth start
	I0603 13:31:35.179389   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:37.413519   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:37.413519   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:37.413736   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:40.036271   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:40.036271   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:40.036271   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:42.245041   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:42.245645   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:42.245701   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:44.856230   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:44.856721   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:44.856721   15052 provision.go:143] copyHostCerts
	I0603 13:31:44.856879   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 13:31:44.857150   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 13:31:44.857150   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 13:31:44.857637   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 13:31:44.858797   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 13:31:44.859048   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 13:31:44.859048   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 13:31:44.859531   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 13:31:44.860776   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 13:31:44.861090   15052 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 13:31:44.861149   15052 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 13:31:44.861176   15052 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 13:31:44.862166   15052 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-149700-m03 san=[127.0.0.1 172.22.150.43 ha-149700-m03 localhost minikube]
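
The server certificate generated above carries both IP and DNS SANs (127.0.0.1, 172.22.150.43, ha-149700-m03, localhost, minikube) so the Docker TLS endpoint verifies under any of those names. The sketch below builds a certificate with the same SAN set using crypto/x509; for brevity it is self-signed, whereas the cert in the log is signed by the ca.pem/ca-key.pem pair, and error handling is elided.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Errors are ignored here purely to keep the sketch short.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-149700-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		// SANs taken from the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.22.150.43")},
		DNSNames:    []string{"ha-149700-m03", "localhost", "minikube"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
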
	I0603 13:31:44.976898   15052 provision.go:177] copyRemoteCerts
	I0603 13:31:44.989314   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:31:44.989314   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:47.205207   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:47.205207   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:47.205207   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:49.885247   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:49.886129   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:49.886299   15052 sshutil.go:53] new ssh client: &{IP:172.22.150.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\id_rsa Username:docker}
	I0603 13:31:49.991825   15052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0024692s)
	I0603 13:31:49.991825   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 13:31:49.991825   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:31:50.041834   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 13:31:50.042379   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 13:31:50.095009   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 13:31:50.095564   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:31:50.146141   15052 provision.go:87] duration metric: took 14.9666891s to configureAuth
	I0603 13:31:50.146264   15052 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:31:50.147069   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:31:50.147187   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:52.320460   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:52.321484   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:52.321533   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:54.921348   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:54.921411   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:54.927585   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:31:54.927585   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:31:54.927585   15052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 13:31:55.064169   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 13:31:55.064262   15052 buildroot.go:70] root file system type: tmpfs
	I0603 13:31:55.064585   15052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 13:31:55.064662   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:31:57.260482   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:31:57.260482   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:57.260629   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:31:59.865074   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:31:59.865074   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:31:59.870830   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:31:59.871509   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:31:59.871509   15052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.22.153.250"
	Environment="NO_PROXY=172.22.153.250,172.22.154.57"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 13:32:00.039715   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.22.153.250
	Environment=NO_PROXY=172.22.153.250,172.22.154.57
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 13:32:00.039799   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:02.204696   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:02.204696   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:02.204878   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:04.787867   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:04.788637   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:04.797317   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:32:04.797317   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:32:04.797317   15052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 13:32:07.025186   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 13:32:07.025738   15052 machine.go:97] duration metric: took 46.950631s to provisionDockerMachine
	I0603 13:32:07.025738   15052 client.go:171] duration metric: took 1m58.9054871s to LocalClient.Create
	I0603 13:32:07.025878   15052 start.go:167] duration metric: took 1m58.9057386s to libmachine.API.Create "ha-149700"
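
Provisioning ends by writing /lib/systemd/system/docker.service.new and only installing and restarting Docker when it differs from the current unit (the `diff || { mv; daemon-reload; enable; restart; }` command above). The unit is mostly fixed text; a sketch of rendering the per-node parts (the NO_PROXY environment lines and the extra dockerd flags) with text/template is below. The struct fields and the abbreviated unit text are illustrative, not minikube's template.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down docker.service body; only the proxy env lines and the extra
// dockerd arguments vary per node in the log above.
const unit = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
{{range .NoProxy}}Environment="NO_PROXY={{.}}"
{{end}}ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock {{.ExtraArgs}}

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	_ = t.Execute(os.Stdout, struct {
		NoProxy   []string
		ExtraArgs string
	}{
		NoProxy:   []string{"172.22.153.250", "172.22.153.250,172.22.154.57"},
		ExtraArgs: "--label provider=hyperv --insecure-registry 10.96.0.0/12",
	})
}
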
	I0603 13:32:07.025878   15052 start.go:293] postStartSetup for "ha-149700-m03" (driver="hyperv")
	I0603 13:32:07.025878   15052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:32:07.040879   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:32:07.040879   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:09.221392   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:09.221392   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:09.221811   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:11.872771   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:11.873572   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:11.873690   15052 sshutil.go:53] new ssh client: &{IP:172.22.150.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\id_rsa Username:docker}
	I0603 13:32:11.988145   15052 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9472252s)
	I0603 13:32:12.000957   15052 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:32:12.008518   15052 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:32:12.008636   15052 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 13:32:12.009126   15052 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 13:32:12.010124   15052 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 13:32:12.010124   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 13:32:12.022455   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:32:12.043727   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 13:32:12.095244   15052 start.go:296] duration metric: took 5.0693246s for postStartSetup
	I0603 13:32:12.098116   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:14.284282   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:14.284988   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:14.284988   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:16.905317   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:16.905317   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:16.906089   15052 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\config.json ...
	I0603 13:32:16.908563   15052 start.go:128] duration metric: took 2m8.7924569s to createHost
	I0603 13:32:16.908625   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:19.136241   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:19.137152   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:19.137285   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:21.803366   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:21.803366   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:21.809757   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:32:21.810340   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:32:21.810541   15052 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:32:21.944831   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717421541.939964799
	
	I0603 13:32:21.944918   15052 fix.go:216] guest clock: 1717421541.939964799
	I0603 13:32:21.944918   15052 fix.go:229] Guest: 2024-06-03 13:32:21.939964799 +0000 UTC Remote: 2024-06-03 13:32:16.9086259 +0000 UTC m=+560.095810701 (delta=5.031338899s)
	I0603 13:32:21.945005   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:24.194988   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:24.194988   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:24.194988   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:26.854859   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:26.855012   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:26.860603   15052 main.go:141] libmachine: Using SSH client type: native
	I0603 13:32:26.861383   15052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.43 22 <nil> <nil>}
	I0603 13:32:26.861383   15052 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717421541
	I0603 13:32:27.017953   15052 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 13:32:21 UTC 2024
	
	I0603 13:32:27.017953   15052 fix.go:236] clock set: Mon Jun  3 13:32:21 UTC 2024
	 (err=<nil>)
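
The clock check above reads the guest's `date +%s.%N`, compares it with the host-side timestamp recorded when createHost finished, and corrects the roughly five-second skew with `sudo date -s @<seconds>`. The sketch below reproduces only the delta computation using the two timestamps from the log; the two-second threshold is an assumption, since the log does not state when a correction is triggered.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values echoed in the log: the guest clock and the host-side "Remote" time.
	guest := time.Unix(1717421541, 939964799)
	remote := time.Date(2024, 6, 3, 13, 32, 16, 908625900, time.UTC)

	delta := guest.Sub(remote)
	fmt.Printf("guest/host skew: %v\n", delta) // ~5.03s in the log

	if delta > 2*time.Second || delta < -2*time.Second {
		// The log shows the correction issued over SSH as: sudo date -s @<seconds>
		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
	}
}
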
	I0603 13:32:27.017953   15052 start.go:83] releasing machines lock for "ha-149700-m03", held for 2m18.9017639s
	I0603 13:32:27.017953   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:29.236678   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:29.237477   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:29.237477   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:31.863157   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:31.863157   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:31.866636   15052 out.go:177] * Found network options:
	I0603 13:32:31.869974   15052 out.go:177]   - NO_PROXY=172.22.153.250,172.22.154.57
	W0603 13:32:31.872093   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 13:32:31.872093   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 13:32:31.874949   15052 out.go:177]   - NO_PROXY=172.22.153.250,172.22.154.57
	W0603 13:32:31.877914   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 13:32:31.877914   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 13:32:31.879468   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 13:32:31.879543   15052 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 13:32:31.882419   15052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:32:31.882480   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:31.892926   15052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 13:32:31.892926   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700-m03 ).state
	I0603 13:32:34.143902   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:34.144175   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:34.144175   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:34.165181   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:34.166103   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:34.166103   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700-m03 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:37.003054   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:37.003333   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:37.003570   15052 sshutil.go:53] new ssh client: &{IP:172.22.150.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\id_rsa Username:docker}
	I0603 13:32:37.028972   15052 main.go:141] libmachine: [stdout =====>] : 172.22.150.43
	
	I0603 13:32:37.028972   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:37.029621   15052 sshutil.go:53] new ssh client: &{IP:172.22.150.43 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700-m03\id_rsa Username:docker}
	I0603 13:32:37.158511   15052 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2655414s)
	W0603 13:32:37.158677   15052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:32:37.158677   15052 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.276215s)
	I0603 13:32:37.171169   15052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:32:37.200165   15052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:32:37.200301   15052 start.go:494] detecting cgroup driver to use...
	I0603 13:32:37.200505   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:32:37.250315   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 13:32:37.283316   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 13:32:37.304197   15052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 13:32:37.316443   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 13:32:37.348762   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 13:32:37.381957   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 13:32:37.413995   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 13:32:37.451388   15052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:32:37.486007   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 13:32:37.518651   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 13:32:37.552843   15052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 13:32:37.586730   15052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:32:37.619410   15052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:32:37.651691   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:32:37.863545   15052 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 13:32:37.896459   15052 start.go:494] detecting cgroup driver to use...
	I0603 13:32:37.911973   15052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 13:32:37.956554   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:32:37.992217   15052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:32:38.037960   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:32:38.075746   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 13:32:38.113079   15052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 13:32:38.177594   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 13:32:38.201897   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:32:38.247850   15052 ssh_runner.go:195] Run: which cri-dockerd
	I0603 13:32:38.264863   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 13:32:38.281720   15052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 13:32:38.325611   15052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 13:32:38.536285   15052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 13:32:38.728593   15052 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 13:32:38.728675   15052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 13:32:38.773321   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:32:38.998449   15052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 13:32:41.538132   15052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5396621s)
	I0603 13:32:41.553586   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 13:32:41.595738   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 13:32:41.635351   15052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 13:32:41.855171   15052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 13:32:42.062671   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:32:42.277851   15052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 13:32:42.322829   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 13:32:42.361039   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:32:42.578360   15052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 13:32:42.691063   15052 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 13:32:42.703351   15052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 13:32:42.712429   15052 start.go:562] Will wait 60s for crictl version
	I0603 13:32:42.725300   15052 ssh_runner.go:195] Run: which crictl
	I0603 13:32:42.743190   15052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:32:42.800669   15052 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 13:32:42.810062   15052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 13:32:42.858169   15052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
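
The two `docker version --format {{.Server.Version}}` runs above confirm the daemon is reachable and report the engine version echoed next ("Docker 26.0.2"). The same Go-template query against a local daemon looks like this; it is a local illustration, not the SSH-wrapped call minikube makes.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		fmt.Println("docker not reachable:", err)
		return
	}
	fmt.Println("server version:", strings.TrimSpace(string(out))) // e.g. 26.0.2 in the log
}
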
	I0603 13:32:42.893587   15052 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 13:32:42.895978   15052 out.go:177]   - env NO_PROXY=172.22.153.250
	I0603 13:32:42.899442   15052 out.go:177]   - env NO_PROXY=172.22.153.250,172.22.154.57
	I0603 13:32:42.902734   15052 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 13:32:42.906941   15052 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 13:32:42.906941   15052 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 13:32:42.906941   15052 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 13:32:42.906941   15052 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 13:32:42.910159   15052 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 13:32:42.910159   15052 ip.go:210] interface addr: 172.22.144.1/20
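
To populate host.minikube.internal, the log walks the host's adapters, skips those whose names do not start with "vEthernet (Default Switch)", and takes the matching adapter's IPv4 address (172.22.144.1). A small sketch of that selection with net.Interfaces is below, under the assumption that a prefix match on the adapter name is all that is required.

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
			continue // e.g. "Ethernet 2" and the loopback adapter in the log
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && ipn.IP.To4() != nil {
				fmt.Println("host IP for host.minikube.internal:", ipn.IP)
			}
		}
	}
}
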
	I0603 13:32:42.922073   15052 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 13:32:42.931128   15052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:32:42.956422   15052 mustload.go:65] Loading cluster: ha-149700
	I0603 13:32:42.957191   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:32:42.957985   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:32:45.161140   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:45.161349   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:45.161349   15052 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:32:45.163803   15052 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700 for IP: 172.22.150.43
	I0603 13:32:45.163803   15052 certs.go:194] generating shared ca certs ...
	I0603 13:32:45.163803   15052 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:32:45.164383   15052 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 13:32:45.164919   15052 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 13:32:45.165144   15052 certs.go:256] generating profile certs ...
	I0603 13:32:45.165285   15052 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\client.key
	I0603 13:32:45.165285   15052 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.e71a32e9
	I0603 13:32:45.165285   15052 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.e71a32e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.22.153.250 172.22.154.57 172.22.150.43 172.22.159.254]
	I0603 13:32:45.425427   15052 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.e71a32e9 ...
	I0603 13:32:45.425427   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.e71a32e9: {Name:mke9e0949185c0a71159b79a255f9c85fc9b5e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:32:45.426411   15052 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.e71a32e9 ...
	I0603 13:32:45.426411   15052 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.e71a32e9: {Name:mkeb05129fdadc43e68981aff8b83abf95ceefd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:32:45.427443   15052 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt.e71a32e9 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt
	I0603 13:32:45.438963   15052 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key.e71a32e9 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key
	I0603 13:32:45.441103   15052 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key
	I0603 13:32:45.441103   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 13:32:45.441334   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 13:32:45.441518   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 13:32:45.441698   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 13:32:45.441791   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 13:32:45.441791   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 13:32:45.441791   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 13:32:45.442585   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 13:32:45.442857   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 13:32:45.442857   15052 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 13:32:45.443436   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 13:32:45.443630   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 13:32:45.443630   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 13:32:45.444161   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 13:32:45.444479   15052 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 13:32:45.444479   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 13:32:45.445082   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:32:45.445263   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
	I0603 13:32:45.445534   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:32:47.698840   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:47.698840   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:47.698928   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:50.381834   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:32:50.381834   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:50.382427   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:32:50.487975   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 13:32:50.496255   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 13:32:50.532002   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 13:32:50.543071   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0603 13:32:50.578560   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 13:32:50.586361   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 13:32:50.619126   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 13:32:50.624623   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0603 13:32:50.661168   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 13:32:50.668623   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 13:32:50.701188   15052 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 13:32:50.707337   15052 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0603 13:32:50.727851   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:32:50.779098   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:32:50.830009   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:32:50.877439   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 13:32:50.931615   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0603 13:32:50.980919   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:32:51.026832   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:32:51.077131   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-149700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:32:51.132545   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 13:32:51.181374   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:32:51.230234   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 13:32:51.279831   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 13:32:51.313071   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0603 13:32:51.349063   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 13:32:51.384805   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0603 13:32:51.426131   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 13:32:51.464842   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0603 13:32:51.502127   15052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 13:32:51.551845   15052 ssh_runner.go:195] Run: openssl version
	I0603 13:32:51.574248   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:32:51.607281   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:32:51.616423   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:32:51.630094   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:32:51.652617   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:32:51.685805   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 13:32:51.720925   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 13:32:51.728239   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 13:32:51.743704   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 13:32:51.766385   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
	I0603 13:32:51.800222   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 13:32:51.833265   15052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 13:32:51.840489   15052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 13:32:51.853789   15052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 13:32:51.875679   15052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
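	The three link-and-hash sequences above register minikubeCA.pem, 10544.pem and 105442.pem with the guest's OpenSSL trust store, which looks certificates up by subject-hash-named symlinks under /etc/ssl/certs. A minimal sketch of the same operation for a single certificate, reusing the paths and the b5213941 hash shown in this log (illustrative only, run inside the guest VM):
	        # compute the subject hash OpenSSL expects for the symlink name
	        openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	        # create the hash-named symlink so the cert is picked up as a trusted CA
	        sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0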
	I0603 13:32:51.910153   15052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:32:51.918378   15052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 13:32:51.918700   15052 kubeadm.go:928] updating node {m03 172.22.150.43 8443 v1.30.1 docker true true} ...
	I0603 13:32:51.918700   15052 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-149700-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.150.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:32:51.918700   15052 kube-vip.go:115] generating kube-vip config ...
	I0603 13:32:51.931248   15052 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 13:32:51.960322   15052 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 13:32:51.960487   15052 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.22.159.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
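	The manifest above is the static-pod definition minikube generates for kube-vip; later in this log it is copied to /etc/kubernetes/manifests/kube-vip.yaml, where the kubelet on the joining control-plane node runs it directly. A quick, illustrative way to confirm the result once the node has joined (mirror-pod naming assumed from standard static-pod behaviour, context name assumed to match the profile):
	        # static pods surface in the API as mirror pods named <pod>-<node>
	        kubectl --context ha-149700 -n kube-system get pod kube-vip-ha-149700-m03 -o wide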
	I0603 13:32:51.973317   15052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:32:51.996573   15052 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 13:32:52.009688   15052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 13:32:52.027813   15052 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 13:32:52.027813   15052 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0603 13:32:52.027813   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 13:32:52.027813   15052 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0603 13:32:52.027813   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 13:32:52.044153   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:32:52.044430   15052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 13:32:52.045057   15052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 13:32:52.069046   15052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 13:32:52.069134   15052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 13:32:52.069250   15052 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 13:32:52.069250   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 13:32:52.069250   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 13:32:52.086385   15052 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 13:32:52.132720   15052 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 13:32:52.132720   15052 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0603 13:32:53.429103   15052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 13:32:53.453714   15052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 13:32:53.492071   15052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:32:53.525152   15052 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 13:32:53.572452   15052 ssh_runner.go:195] Run: grep 172.22.159.254	control-plane.minikube.internal$ /etc/hosts
	I0603 13:32:53.579592   15052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:32:53.623744   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:32:53.844290   15052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:32:53.876045   15052 host.go:66] Checking if "ha-149700" exists ...
	I0603 13:32:53.876673   15052 start.go:316] joinCluster: &{Name:ha-149700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-149700 Namespace:default APIServerHAVIP:172.22.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.153.250 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.154.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.22.150.43 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:32:53.876673   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 13:32:53.877487   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-149700 ).state
	I0603 13:32:56.104756   15052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 13:32:56.104756   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:56.104756   15052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-149700 ).networkadapters[0]).ipaddresses[0]
	I0603 13:32:58.743376   15052 main.go:141] libmachine: [stdout =====>] : 172.22.153.250
	
	I0603 13:32:58.743467   15052 main.go:141] libmachine: [stderr =====>] : 
	I0603 13:32:58.743467   15052 sshutil.go:53] new ssh client: &{IP:172.22.153.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-149700\id_rsa Username:docker}
	I0603 13:32:58.979028   15052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1022775s)
	I0603 13:32:58.979096   15052 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.22.150.43 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:32:58.979173   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oazovl.bojr37tgui3yqu3q --discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-149700-m03 --control-plane --apiserver-advertise-address=172.22.150.43 --apiserver-bind-port=8443"
	I0603 13:33:44.496870   15052 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oazovl.bojr37tgui3yqu3q --discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-149700-m03 --control-plane --apiserver-advertise-address=172.22.150.43 --apiserver-bind-port=8443": (45.5172039s)
	I0603 13:33:44.496988   15052 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 13:33:45.382234   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-149700-m03 minikube.k8s.io/updated_at=2024_06_03T13_33_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=ha-149700 minikube.k8s.io/primary=false
	I0603 13:33:45.554480   15052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-149700-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 13:33:45.703262   15052 start.go:318] duration metric: took 51.8261692s to joinCluster
	I0603 13:33:45.703461   15052 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.22.150.43 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 13:33:45.703761   15052 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:33:45.708788   15052 out.go:177] * Verifying Kubernetes components...
	I0603 13:33:45.727224   15052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:33:46.177445   15052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:33:46.215937   15052 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:33:46.216755   15052 kapi.go:59] client config for ha-149700: &rest.Config{Host:"https://172.22.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-149700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-149700\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 13:33:46.216955   15052 kubeadm.go:477] Overriding stale ClientConfig host https://172.22.159.254:8443 with https://172.22.153.250:8443
	I0603 13:33:46.217874   15052 node_ready.go:35] waiting up to 6m0s for node "ha-149700-m03" to be "Ready" ...
	I0603 13:33:46.217874   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:46.217874   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:46.217874   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:46.217874   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:46.232804   15052 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0603 13:33:46.723993   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:46.724074   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:46.724074   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:46.724074   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:46.728523   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:47.228685   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:47.228953   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:47.228953   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:47.228953   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:47.238208   15052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 13:33:47.719946   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:47.719946   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:47.720021   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:47.720021   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:47.725222   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:48.221181   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:48.221181   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:48.221181   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:48.221181   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:48.226645   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:48.227550   15052 node_ready.go:53] node "ha-149700-m03" has status "Ready":"False"
	I0603 13:33:48.728346   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:48.728346   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:48.728346   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:48.728346   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:48.733666   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:49.218876   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:49.218876   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:49.218876   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:49.218876   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:49.223158   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:49.726805   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:49.727069   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:49.727069   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:49.727069   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:49.731537   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:50.228635   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:50.228635   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:50.228967   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:50.228967   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:50.235493   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:33:50.236263   15052 node_ready.go:53] node "ha-149700-m03" has status "Ready":"False"
	I0603 13:33:50.730586   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:50.730645   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:50.730645   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:50.730645   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:50.735414   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:51.233044   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:51.233197   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:51.233197   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:51.233197   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:51.236446   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:51.727570   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:51.727570   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:51.727708   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:51.727708   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:51.733077   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:52.225949   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:52.225949   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.225949   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.225949   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.231429   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:52.731836   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:52.731930   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.731930   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.731930   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.736107   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:52.738736   15052 node_ready.go:49] node "ha-149700-m03" has status "Ready":"True"
	I0603 13:33:52.738736   15052 node_ready.go:38] duration metric: took 6.5208099s for node "ha-149700-m03" to be "Ready" ...
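	The block of repeated GET /api/v1/nodes/ha-149700-m03 requests above is minikube polling the node's Ready condition through its own REST client. An equivalent manual check (illustrative, assuming the kubeconfig context is named after the profile) would be:
	        # wait until the freshly joined control-plane node reports Ready, as the loop above does
	        kubectl --context ha-149700 wait --for=condition=Ready node/ha-149700-m03 --timeout=6m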
	I0603 13:33:52.738736   15052 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:33:52.738942   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:33:52.738942   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.738942   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.738942   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.764416   15052 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0603 13:33:52.777483   15052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6qmlg" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.777483   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6qmlg
	I0603 13:33:52.777483   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.777483   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.777483   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.782833   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:52.784169   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:52.784278   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.784278   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.784331   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.787510   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:52.788704   15052 pod_ready.go:92] pod "coredns-7db6d8ff4d-6qmlg" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:52.788776   15052 pod_ready.go:81] duration metric: took 11.2215ms for pod "coredns-7db6d8ff4d-6qmlg" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.788776   15052 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ptqqz" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.788899   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ptqqz
	I0603 13:33:52.788939   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.788962   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.788962   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.793266   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:52.794278   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:52.794325   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.794378   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.794378   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.801557   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:33:52.804921   15052 pod_ready.go:92] pod "coredns-7db6d8ff4d-ptqqz" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:52.805081   15052 pod_ready.go:81] duration metric: took 16.3042ms for pod "coredns-7db6d8ff4d-ptqqz" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.805081   15052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.805533   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700
	I0603 13:33:52.805533   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.805628   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.805628   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.815090   15052 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0603 13:33:52.815881   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:52.816223   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.816258   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.816258   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.819441   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:52.820361   15052 pod_ready.go:92] pod "etcd-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:52.820439   15052 pod_ready.go:81] duration metric: took 15.358ms for pod "etcd-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.820501   15052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.820608   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m02
	I0603 13:33:52.820633   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.820672   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.820672   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.826727   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:52.827290   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:52.827290   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.827290   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.827290   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.831678   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:52.833057   15052 pod_ready.go:92] pod "etcd-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:52.833119   15052 pod_ready.go:81] duration metric: took 12.6175ms for pod "etcd-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.833119   15052 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:52.936553   15052 request.go:629] Waited for 103.1539ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:52.936736   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:52.936736   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:52.936736   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:52.936736   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:52.940852   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:53.140964   15052 request.go:629] Waited for 197.7085ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:53.141034   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:53.141118   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:53.141118   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:53.141118   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:53.146081   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:53.346290   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:53.346644   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:53.346644   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:53.346644   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:53.351303   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:53.532425   15052 request.go:629] Waited for 179.7093ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:53.532582   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:53.532731   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:53.532766   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:53.532766   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:53.537962   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:53.847270   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:53.847270   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:53.847270   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:53.847270   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:53.851844   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:53.941243   15052 request.go:629] Waited for 87.66ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:53.941390   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:53.941390   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:53.941390   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:53.941453   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:53.947957   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:33:54.333696   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:54.333771   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:54.333771   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:54.333771   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:54.338723   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:54.340722   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:54.340722   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:54.340722   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:54.340722   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:54.344334   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:54.847538   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:54.847538   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:54.847538   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:54.847871   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:54.853210   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:54.854994   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:54.854994   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:54.854994   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:54.854994   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:54.859298   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:54.860591   15052 pod_ready.go:102] pod "etcd-ha-149700-m03" in "kube-system" namespace has status "Ready":"False"
	I0603 13:33:55.335211   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:55.335211   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:55.335496   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:55.335496   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:55.348871   15052 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 13:33:55.349970   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:55.349970   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:55.349970   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:55.349970   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:55.353302   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:55.839954   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:55.839954   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:55.839954   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:55.839954   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:55.845120   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:55.846523   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:55.846523   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:55.846523   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:55.846523   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:55.850171   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:56.344273   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-149700-m03
	I0603 13:33:56.344273   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.344335   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.344335   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.349704   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:56.350668   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:56.350724   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.350724   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.350724   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.354363   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:56.355598   15052 pod_ready.go:92] pod "etcd-ha-149700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:56.355598   15052 pod_ready.go:81] duration metric: took 3.5224505s for pod "etcd-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:56.355660   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:56.355660   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700
	I0603 13:33:56.355791   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.355820   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.355820   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.359611   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:56.360697   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:56.360697   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.360697   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.360697   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.364597   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:56.365599   15052 pod_ready.go:92] pod "kube-apiserver-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:56.365599   15052 pod_ready.go:81] duration metric: took 9.9386ms for pod "kube-apiserver-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:56.365599   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:56.533802   15052 request.go:629] Waited for 168.0495ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700-m02
	I0603 13:33:56.534294   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700-m02
	I0603 13:33:56.534294   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.534294   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.534294   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.538893   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:56.736410   15052 request.go:629] Waited for 196.2816ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:56.736410   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:56.736603   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.736603   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.736603   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.742600   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:56.743261   15052 pod_ready.go:92] pod "kube-apiserver-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:56.743261   15052 pod_ready.go:81] duration metric: took 377.6587ms for pod "kube-apiserver-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:56.743261   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:56.939312   15052 request.go:629] Waited for 195.5876ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700-m03
	I0603 13:33:56.939312   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-149700-m03
	I0603 13:33:56.939312   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:56.939312   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:56.939312   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:56.944674   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:57.141564   15052 request.go:629] Waited for 196.0206ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:57.141745   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:57.141745   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:57.141745   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:57.141745   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:57.146932   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:57.147556   15052 pod_ready.go:92] pod "kube-apiserver-ha-149700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:57.147556   15052 pod_ready.go:81] duration metric: took 404.2916ms for pod "kube-apiserver-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:57.147556   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:57.332461   15052 request.go:629] Waited for 184.5735ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700
	I0603 13:33:57.332549   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700
	I0603 13:33:57.332614   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:57.332614   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:57.332614   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:57.338573   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:57.536944   15052 request.go:629] Waited for 197.3655ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:57.536944   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:57.536944   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:57.536944   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:57.536944   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:57.540999   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:57.540999   15052 pod_ready.go:92] pod "kube-controller-manager-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:57.540999   15052 pod_ready.go:81] duration metric: took 393.4403ms for pod "kube-controller-manager-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:57.540999   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:57.740879   15052 request.go:629] Waited for 199.878ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700-m02
	I0603 13:33:57.741102   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700-m02
	I0603 13:33:57.741102   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:57.741102   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:57.741102   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:57.746898   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:57.945550   15052 request.go:629] Waited for 198.4235ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:57.945677   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:57.945677   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:57.945677   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:57.945766   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:57.951357   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:57.952195   15052 pod_ready.go:92] pod "kube-controller-manager-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:57.952195   15052 pod_ready.go:81] duration metric: took 411.1929ms for pod "kube-controller-manager-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:57.952751   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:58.134177   15052 request.go:629] Waited for 181.271ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700-m03
	I0603 13:33:58.134264   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-149700-m03
	I0603 13:33:58.134264   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:58.134264   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:58.134470   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:58.139278   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:58.336533   15052 request.go:629] Waited for 196.0116ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:58.336774   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:58.336774   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:58.336858   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:58.336858   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:58.343168   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:33:58.343936   15052 pod_ready.go:92] pod "kube-controller-manager-ha-149700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:58.344023   15052 pod_ready.go:81] duration metric: took 391.2687ms for pod "kube-controller-manager-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:58.344107   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9wjpn" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:58.540487   15052 request.go:629] Waited for 196.3086ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wjpn
	I0603 13:33:58.540877   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9wjpn
	I0603 13:33:58.540877   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:58.540877   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:58.540877   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:58.549835   15052 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 13:33:58.743720   15052 request.go:629] Waited for 192.483ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:58.743991   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:58.744115   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:58.744187   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:58.744187   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:58.749547   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:33:58.751499   15052 pod_ready.go:92] pod "kube-proxy-9wjpn" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:58.751600   15052 pod_ready.go:81] duration metric: took 407.3888ms for pod "kube-proxy-9wjpn" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:58.751600   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pvnfv" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:58.946791   15052 request.go:629] Waited for 194.9025ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pvnfv
	I0603 13:33:58.947026   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pvnfv
	I0603 13:33:58.947026   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:58.947026   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:58.947163   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:58.951484   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:59.135887   15052 request.go:629] Waited for 182.1945ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:59.135887   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:33:59.135887   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:59.136156   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:59.136191   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:59.141375   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:59.142452   15052 pod_ready.go:92] pod "kube-proxy-pvnfv" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:59.142452   15052 pod_ready.go:81] duration metric: took 390.8489ms for pod "kube-proxy-pvnfv" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:59.142452   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbzvt" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:59.339259   15052 request.go:629] Waited for 196.464ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbzvt
	I0603 13:33:59.339259   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbzvt
	I0603 13:33:59.339259   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:59.339259   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:59.339259   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:59.343217   15052 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 13:33:59.545562   15052 request.go:629] Waited for 200.6889ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:59.545819   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:33:59.545819   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:59.545819   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:59.545892   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:59.550573   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:33:59.551709   15052 pod_ready.go:92] pod "kube-proxy-vbzvt" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:59.551778   15052 pod_ready.go:81] duration metric: took 409.2254ms for pod "kube-proxy-vbzvt" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:59.551778   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:59.735623   15052 request.go:629] Waited for 183.5049ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700
	I0603 13:33:59.735623   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700
	I0603 13:33:59.735869   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:59.735869   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:59.735869   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:59.742243   15052 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 13:33:59.942917   15052 request.go:629] Waited for 199.9159ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:59.942917   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700
	I0603 13:33:59.942917   15052 round_trippers.go:469] Request Headers:
	I0603 13:33:59.942917   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:33:59.942917   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:33:59.956085   15052 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 13:33:59.956783   15052 pod_ready.go:92] pod "kube-scheduler-ha-149700" in "kube-system" namespace has status "Ready":"True"
	I0603 13:33:59.956869   15052 pod_ready.go:81] duration metric: took 405.0877ms for pod "kube-scheduler-ha-149700" in "kube-system" namespace to be "Ready" ...
	I0603 13:33:59.956899   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:34:00.147232   15052 request.go:629] Waited for 190.1461ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700-m02
	I0603 13:34:00.147640   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700-m02
	I0603 13:34:00.147640   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:00.147640   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:00.147780   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:00.153075   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:34:00.335010   15052 request.go:629] Waited for 180.1376ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:34:00.335214   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m02
	I0603 13:34:00.335214   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:00.335214   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:00.335214   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:00.339598   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:34:00.341105   15052 pod_ready.go:92] pod "kube-scheduler-ha-149700-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 13:34:00.341105   15052 pod_ready.go:81] duration metric: took 384.202ms for pod "kube-scheduler-ha-149700-m02" in "kube-system" namespace to be "Ready" ...
	I0603 13:34:00.341105   15052 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:34:00.537741   15052 request.go:629] Waited for 196.6347ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700-m03
	I0603 13:34:00.537741   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-149700-m03
	I0603 13:34:00.537741   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:00.537741   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:00.537741   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:00.542743   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:34:00.738683   15052 request.go:629] Waited for 194.3897ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:34:00.738909   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes/ha-149700-m03
	I0603 13:34:00.738909   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:00.739035   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:00.739035   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:00.743214   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:34:00.744846   15052 pod_ready.go:92] pod "kube-scheduler-ha-149700-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 13:34:00.744916   15052 pod_ready.go:81] duration metric: took 403.8078ms for pod "kube-scheduler-ha-149700-m03" in "kube-system" namespace to be "Ready" ...
	I0603 13:34:00.744916   15052 pod_ready.go:38] duration metric: took 8.0061142s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:34:00.745033   15052 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:34:00.757859   15052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:34:00.788501   15052 api_server.go:72] duration metric: took 15.0847816s to wait for apiserver process to appear ...
	I0603 13:34:00.788501   15052 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:34:00.788577   15052 api_server.go:253] Checking apiserver healthz at https://172.22.153.250:8443/healthz ...
	I0603 13:34:00.798814   15052 api_server.go:279] https://172.22.153.250:8443/healthz returned 200:
	ok
	I0603 13:34:00.799227   15052 round_trippers.go:463] GET https://172.22.153.250:8443/version
	I0603 13:34:00.799227   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:00.799227   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:00.799227   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:00.800430   15052 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 13:34:00.800977   15052 api_server.go:141] control plane version: v1.30.1
	I0603 13:34:00.801059   15052 api_server.go:131] duration metric: took 12.4813ms to wait for apiserver health ...
	I0603 13:34:00.801093   15052 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:34:00.940914   15052 request.go:629] Waited for 139.6746ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:34:00.941015   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:34:00.941165   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:00.941165   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:00.941165   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:00.952052   15052 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 13:34:00.962948   15052 system_pods.go:59] 24 kube-system pods found
	I0603 13:34:00.962948   15052 system_pods.go:61] "coredns-7db6d8ff4d-6qmlg" [e5596259-8a05-48a0-93ca-c46f8d67a213] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "coredns-7db6d8ff4d-ptqqz" [5f7a6070-d736-4701-a5e0-98dd4e01948a] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "etcd-ha-149700" [e75a16ce-11b4-4e7a-8d3d-abfbdb69c3dd] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "etcd-ha-149700-m02" [25624fa9-12e8-4bcf-be97-56ceba40e44d] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "etcd-ha-149700-m03" [ff62797d-c9d4-4355-8357-9c8682ac707e] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kindnet-l2cph" [c145f100-1464-40fa-a165-1a92800515b0] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kindnet-qphhc" [d0b48843-531c-43f1-996a-9ac482b9e838] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kindnet-v4w4l" [3df37f74-f7b9-43c1-854b-38ab7224fc66] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-apiserver-ha-149700" [9421ffa6-ceee-4b30-ab28-5b00c6181dd2] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-apiserver-ha-149700-m02" [027bc9b6-d88a-4ee9-bd31-22e3f8ca7463] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-apiserver-ha-149700-m03" [290fcfac-d887-4444-b19c-2662b0e2cdf0] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-controller-manager-ha-149700" [b812ec80-4942-448f-8017-2440b3f07ce8] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-controller-manager-ha-149700-m02" [c8ad5667-4fec-4425-b553-42ff3f8a3439] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-controller-manager-ha-149700-m03" [9fe1e19c-fd2d-48fe-8fda-7e327c91cabb] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-proxy-9wjpn" [5f53e110-b18c-4255-963d-efecaa1f7f2d] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-proxy-pvnfv" [6daa679a-0264-4142-9ecb-a87d769db00b] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-proxy-vbzvt" [b025c683-b092-43ca-8dce-b4d687f5eb2d] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-scheduler-ha-149700" [db7d2a13-c940-49f5-bf6f-d5077e3f223c] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-scheduler-ha-149700-m02" [8174835b-f95e-41a3-b5ef-f96197fd45dc] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-scheduler-ha-149700-m03" [d3bec3fd-3af2-4551-96b6-7fdffd794600] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-vip-ha-149700" [f84f708c-1c96-438f-893e-1a3ed1c16e3a] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-vip-ha-149700-m02" [d238fd54-8865-4689-9b0c-cfce80b8b3b4] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "kube-vip-ha-149700-m03" [0c108f8d-1b10-466e-b210-7ef8a84bc9c2] Running
	I0603 13:34:00.962948   15052 system_pods.go:61] "storage-provisioner" [f3d34c4f-12d1-4980-8512-3c80dc9d6047] Running
	I0603 13:34:00.962948   15052 system_pods.go:74] duration metric: took 161.8538ms to wait for pod list to return data ...
	I0603 13:34:00.962948   15052 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:34:01.144741   15052 request.go:629] Waited for 181.0052ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/default/serviceaccounts
	I0603 13:34:01.144741   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/default/serviceaccounts
	I0603 13:34:01.144741   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:01.144741   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:01.144741   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:01.149371   15052 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 13:34:01.149908   15052 default_sa.go:45] found service account: "default"
	I0603 13:34:01.150032   15052 default_sa.go:55] duration metric: took 186.9583ms for default service account to be created ...
	I0603 13:34:01.150032   15052 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:34:01.346348   15052 request.go:629] Waited for 196.2316ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:34:01.346587   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/namespaces/kube-system/pods
	I0603 13:34:01.346587   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:01.346587   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:01.346675   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:01.360179   15052 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 13:34:01.370303   15052 system_pods.go:86] 24 kube-system pods found
	I0603 13:34:01.370303   15052 system_pods.go:89] "coredns-7db6d8ff4d-6qmlg" [e5596259-8a05-48a0-93ca-c46f8d67a213] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "coredns-7db6d8ff4d-ptqqz" [5f7a6070-d736-4701-a5e0-98dd4e01948a] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "etcd-ha-149700" [e75a16ce-11b4-4e7a-8d3d-abfbdb69c3dd] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "etcd-ha-149700-m02" [25624fa9-12e8-4bcf-be97-56ceba40e44d] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "etcd-ha-149700-m03" [ff62797d-c9d4-4355-8357-9c8682ac707e] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kindnet-l2cph" [c145f100-1464-40fa-a165-1a92800515b0] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kindnet-qphhc" [d0b48843-531c-43f1-996a-9ac482b9e838] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kindnet-v4w4l" [3df37f74-f7b9-43c1-854b-38ab7224fc66] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-apiserver-ha-149700" [9421ffa6-ceee-4b30-ab28-5b00c6181dd2] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-apiserver-ha-149700-m02" [027bc9b6-d88a-4ee9-bd31-22e3f8ca7463] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-apiserver-ha-149700-m03" [290fcfac-d887-4444-b19c-2662b0e2cdf0] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-controller-manager-ha-149700" [b812ec80-4942-448f-8017-2440b3f07ce8] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-controller-manager-ha-149700-m02" [c8ad5667-4fec-4425-b553-42ff3f8a3439] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-controller-manager-ha-149700-m03" [9fe1e19c-fd2d-48fe-8fda-7e327c91cabb] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-proxy-9wjpn" [5f53e110-b18c-4255-963d-efecaa1f7f2d] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-proxy-pvnfv" [6daa679a-0264-4142-9ecb-a87d769db00b] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-proxy-vbzvt" [b025c683-b092-43ca-8dce-b4d687f5eb2d] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-scheduler-ha-149700" [db7d2a13-c940-49f5-bf6f-d5077e3f223c] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-scheduler-ha-149700-m02" [8174835b-f95e-41a3-b5ef-f96197fd45dc] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-scheduler-ha-149700-m03" [d3bec3fd-3af2-4551-96b6-7fdffd794600] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-vip-ha-149700" [f84f708c-1c96-438f-893e-1a3ed1c16e3a] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-vip-ha-149700-m02" [d238fd54-8865-4689-9b0c-cfce80b8b3b4] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "kube-vip-ha-149700-m03" [0c108f8d-1b10-466e-b210-7ef8a84bc9c2] Running
	I0603 13:34:01.370303   15052 system_pods.go:89] "storage-provisioner" [f3d34c4f-12d1-4980-8512-3c80dc9d6047] Running
	I0603 13:34:01.370303   15052 system_pods.go:126] duration metric: took 220.2695ms to wait for k8s-apps to be running ...
	I0603 13:34:01.370303   15052 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:34:01.381898   15052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:34:01.412160   15052 system_svc.go:56] duration metric: took 41.8565ms WaitForService to wait for kubelet
	I0603 13:34:01.412160   15052 kubeadm.go:576] duration metric: took 15.7084362s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:34:01.412160   15052 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:34:01.536385   15052 request.go:629] Waited for 124.1383ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.153.250:8443/api/v1/nodes
	I0603 13:34:01.536555   15052 round_trippers.go:463] GET https://172.22.153.250:8443/api/v1/nodes
	I0603 13:34:01.536616   15052 round_trippers.go:469] Request Headers:
	I0603 13:34:01.536616   15052 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 13:34:01.536616   15052 round_trippers.go:473]     Accept: application/json, */*
	I0603 13:34:01.541875   15052 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 13:34:01.544213   15052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:34:01.544452   15052 node_conditions.go:123] node cpu capacity is 2
	I0603 13:34:01.544452   15052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:34:01.544452   15052 node_conditions.go:123] node cpu capacity is 2
	I0603 13:34:01.544452   15052 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:34:01.544576   15052 node_conditions.go:123] node cpu capacity is 2
	I0603 13:34:01.544576   15052 node_conditions.go:105] duration metric: took 132.4149ms to run NodePressure ...
	I0603 13:34:01.544576   15052 start.go:240] waiting for startup goroutines ...
	I0603 13:34:01.544646   15052 start.go:254] writing updated cluster config ...
	I0603 13:34:01.557345   15052 ssh_runner.go:195] Run: rm -f paused
	I0603 13:34:01.694652   15052 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:34:01.699803   15052 out.go:177] * Done! kubectl is now configured to use "ha-149700" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.530635529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.530657529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.530780830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.636458701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.636643803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.639348128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:26:26 ha-149700 dockerd[1320]: time="2024-06-03T13:26:26.639622530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:34:40 ha-149700 dockerd[1320]: time="2024-06-03T13:34:40.642624291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:34:40 ha-149700 dockerd[1320]: time="2024-06-03T13:34:40.642961394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:34:40 ha-149700 dockerd[1320]: time="2024-06-03T13:34:40.642990295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:34:40 ha-149700 dockerd[1320]: time="2024-06-03T13:34:40.644048705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:34:40 ha-149700 cri-dockerd[1221]: time="2024-06-03T13:34:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/33aa4a5311373dc2b150f88764a0d251bc06a7e18caaf64acaa73130d94006cc/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 03 13:34:42 ha-149700 cri-dockerd[1221]: time="2024-06-03T13:34:42Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 03 13:34:42 ha-149700 dockerd[1320]: time="2024-06-03T13:34:42.593358002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 13:34:42 ha-149700 dockerd[1320]: time="2024-06-03T13:34:42.593482403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 13:34:42 ha-149700 dockerd[1320]: time="2024-06-03T13:34:42.593524903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:34:42 ha-149700 dockerd[1320]: time="2024-06-03T13:34:42.593685104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 13:35:46 ha-149700 dockerd[1314]: 2024/06/03 13:35:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 13:35:46 ha-149700 dockerd[1314]: 2024/06/03 13:35:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 13:35:46 ha-149700 dockerd[1314]: 2024/06/03 13:35:46 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 13:35:47 ha-149700 dockerd[1314]: 2024/06/03 13:35:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 13:35:47 ha-149700 dockerd[1314]: 2024/06/03 13:35:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 13:35:47 ha-149700 dockerd[1314]: 2024/06/03 13:35:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 13:35:47 ha-149700 dockerd[1314]: 2024/06/03 13:35:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 13:35:47 ha-149700 dockerd[1314]: 2024/06/03 13:35:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e2286192dae0b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   33aa4a5311373       busybox-fc5497c4f-4hfj7
	d1e8355be36fb       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   ac5843b669517       coredns-7db6d8ff4d-ptqqz
	8cad5b34eaa07       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   592e41948e3a8       storage-provisioner
	e405991670c39       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   f3e2e2177b00f       coredns-7db6d8ff4d-6qmlg
	139823d9d8d4c       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              26 minutes ago      Running             kindnet-cni               0                   d3d6215383bcd       kindnet-qphhc
	4879852b10da4       747097150317f                                                                                         27 minutes ago      Running             kube-proxy                0                   20f17f2b0d4dc       kube-proxy-9wjpn
	7a4ce070a4434       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   67eb75fc9ff2f       kube-vip-ha-149700
	f9a72751b1c60       91be940803172                                                                                         27 minutes ago      Running             kube-apiserver            0                   9169f118d9b08       kube-apiserver-ha-149700
	962282ca80621       a52dc94f0a912                                                                                         27 minutes ago      Running             kube-scheduler            0                   0e10627407c81       kube-scheduler-ha-149700
	b491f438ec2f5       25a1387cdab82                                                                                         27 minutes ago      Running             kube-controller-manager   0                   8ae2f97837c54       kube-controller-manager-ha-149700
	108f442a1dae5       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   c6193e9dd3f2e       etcd-ha-149700
	
	
	==> coredns [d1e8355be36f] <==
	[INFO] 10.244.1.2:36387 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.257424127s
	[INFO] 10.244.0.4:47951 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.415641726s
	[INFO] 10.244.0.4:44854 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158701s
	[INFO] 10.244.0.4:41440 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000249501s
	[INFO] 10.244.2.2:37444 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000102001s
	[INFO] 10.244.2.2:57308 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126301s
	[INFO] 10.244.2.2:50804 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010408158s
	[INFO] 10.244.2.2:47435 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001106s
	[INFO] 10.244.2.2:60556 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111401s
	[INFO] 10.244.1.2:35827 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079801s
	[INFO] 10.244.1.2:41409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068401s
	[INFO] 10.244.1.2:51750 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000057s
	[INFO] 10.244.1.2:54386 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000182001s
	[INFO] 10.244.0.4:48087 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114501s
	[INFO] 10.244.2.2:42711 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071601s
	[INFO] 10.244.2.2:51380 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000224401s
	[INFO] 10.244.1.2:47146 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000203001s
	[INFO] 10.244.0.4:44145 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114101s
	[INFO] 10.244.0.4:52464 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000345502s
	[INFO] 10.244.2.2:35477 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001039s
	[INFO] 10.244.2.2:53416 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000063601s
	[INFO] 10.244.1.2:58374 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215301s
	[INFO] 10.244.1.2:55393 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000271402s
	[INFO] 10.244.1.2:59612 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000165301s
	[INFO] 10.244.1.2:46193 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000253301s
	
	
	==> coredns [e405991670c3] <==
	[INFO] 10.244.1.2:46389 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000093801s
	[INFO] 10.244.0.4:33523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000202701s
	[INFO] 10.244.0.4:40321 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000208601s
	[INFO] 10.244.0.4:59204 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.029309764s
	[INFO] 10.244.0.4:35216 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113201s
	[INFO] 10.244.0.4:43236 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.001976811s
	[INFO] 10.244.2.2:48741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133401s
	[INFO] 10.244.2.2:39388 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191301s
	[INFO] 10.244.2.2:55892 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185401s
	[INFO] 10.244.1.2:60903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137601s
	[INFO] 10.244.1.2:51322 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.005000729s
	[INFO] 10.244.1.2:46958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000053s
	[INFO] 10.244.1.2:53810 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126901s
	[INFO] 10.244.0.4:33768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145401s
	[INFO] 10.244.0.4:51440 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001451s
	[INFO] 10.244.0.4:44295 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000798s
	[INFO] 10.244.2.2:51082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179701s
	[INFO] 10.244.2.2:37686 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053s
	[INFO] 10.244.1.2:51508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191801s
	[INFO] 10.244.1.2:39529 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064801s
	[INFO] 10.244.1.2:39194 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100101s
	[INFO] 10.244.0.4:43140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104701s
	[INFO] 10.244.0.4:33173 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000352202s
	[INFO] 10.244.2.2:44233 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192601s
	[INFO] 10.244.2.2:41640 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000391602s
	
	
	==> describe nodes <==
	Name:               ha-149700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-149700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-149700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T13_26_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:25:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-149700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:53:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:50:20 +0000   Mon, 03 Jun 2024 13:25:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:50:20 +0000   Mon, 03 Jun 2024 13:25:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:50:20 +0000   Mon, 03 Jun 2024 13:25:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:50:20 +0000   Mon, 03 Jun 2024 13:26:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.22.153.250
	  Hostname:    ha-149700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 94538886f55f4cbdb7bcdf9f8a4de860
	  System UUID:                d42864a6-608c-2a4a-b3c1-27f966e2091d
	  Boot ID:                    f47c949f-9fae-4529-afa5-365efb5bd803
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hfj7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-6qmlg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-ptqqz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-149700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-qphhc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-149700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-149700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-9wjpn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-149700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-149700                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node ha-149700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node ha-149700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node ha-149700 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-149700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-149700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-149700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m                node-controller  Node ha-149700 event: Registered Node ha-149700 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-149700 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node ha-149700 event: Registered Node ha-149700 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-149700 event: Registered Node ha-149700 in Controller
	
	
	Name:               ha-149700-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-149700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-149700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T13_29_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:29:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-149700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:52:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 13:50:10 +0000   Mon, 03 Jun 2024 13:52:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 13:50:10 +0000   Mon, 03 Jun 2024 13:52:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 13:50:10 +0000   Mon, 03 Jun 2024 13:52:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 13:50:10 +0000   Mon, 03 Jun 2024 13:52:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.22.154.57
	  Hostname:    ha-149700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b6ae065fc4f949549aef64be5ac14c55
	  System UUID:                0944961d-e844-8341-bc02-bc74b0797070
	  Boot ID:                    71ed6a23-125e-422f-b4c4-85b45c319b1d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vzbnc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-149700-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-l2cph                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-149700-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-149700-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-vbzvt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-149700-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-149700-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-149700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-149700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-149700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-149700-m02 event: Registered Node ha-149700-m02 in Controller
	  Normal  RegisteredNode           23m                node-controller  Node ha-149700-m02 event: Registered Node ha-149700-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-149700-m02 event: Registered Node ha-149700-m02 in Controller
	  Normal  NodeNotReady             34s                node-controller  Node ha-149700-m02 status is now: NodeNotReady
	
	
	Name:               ha-149700-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-149700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-149700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T13_33_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:33:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-149700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:53:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:50:28 +0000   Mon, 03 Jun 2024 13:33:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:50:28 +0000   Mon, 03 Jun 2024 13:33:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:50:28 +0000   Mon, 03 Jun 2024 13:33:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:50:28 +0000   Mon, 03 Jun 2024 13:33:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.22.150.43
	  Hostname:    ha-149700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e056ce9e2ad145808a2a175e96b6ed65
	  System UUID:                afbef1cc-fa5e-564f-9694-5a0a2250e53c
	  Boot ID:                    a6517f5c-10bf-400e-bc82-3672ccf32932
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fkkts                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-149700-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-v4w4l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-149700-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-149700-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-pvnfv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-149700-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-149700-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-149700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-149700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-149700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-149700-m03 event: Registered Node ha-149700-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-149700-m03 event: Registered Node ha-149700-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-149700-m03 event: Registered Node ha-149700-m03 in Controller
	
	
	Name:               ha-149700-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-149700-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-149700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T13_39_03_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:39:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-149700-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:53:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:49:55 +0000   Mon, 03 Jun 2024 13:39:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:49:55 +0000   Mon, 03 Jun 2024 13:39:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:49:55 +0000   Mon, 03 Jun 2024 13:39:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:49:55 +0000   Mon, 03 Jun 2024 13:39:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.22.158.137
	  Hostname:    ha-149700-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 58afe81bc89442de9ac959a96fb63402
	  System UUID:                18d6c6a4-10f9-a44e-9484-bbace1ec84f5
	  Boot ID:                    b31f010d-896e-4e63-9f51-06c0bcfb7e5d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hzhlr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-cv5zv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  RegisteredNode           14m                node-controller  Node ha-149700-m04 event: Registered Node ha-149700-m04 in Controller
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-149700-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-149700-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-149700-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-149700-m04 event: Registered Node ha-149700-m04 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-149700-m04 event: Registered Node ha-149700-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-149700-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun 3 13:24] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.226516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +47.294251] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.166575] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Jun 3 13:25] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.096039] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.504710] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	[  +0.190765] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.209889] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +2.758049] systemd-fstab-generator[1174]: Ignoring "noauto" option for root device
	[  +0.180393] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.184217] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.248547] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[ +11.490901] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.095424] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.377453] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +5.250046] systemd-fstab-generator[1698]: Ignoring "noauto" option for root device
	[  +0.102640] kauditd_printk_skb: 73 callbacks suppressed
	[ +10.163728] systemd-fstab-generator[2198]: Ignoring "noauto" option for root device
	[  +0.141800] kauditd_printk_skb: 72 callbacks suppressed
	[Jun 3 13:26] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.688628] kauditd_printk_skb: 29 callbacks suppressed
	[Jun 3 13:29] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [108f442a1dae] <==
	{"level":"warn","ts":"2024-06-03T13:53:17.327809Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.394927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.397798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.521263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.526876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.527275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.540998Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.551994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.557985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.563443Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.582313Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.598547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.60775Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.61336Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.618002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.627559Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.631164Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.64815Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.657495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.663091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.668166Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.677355Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.6893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.702983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T13:53:17.7283Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a1e86fbc2f15d2e8","from":"a1e86fbc2f15d2e8","remote-peer-id":"fc53ddd60570814a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:53:17 up 29 min,  0 users,  load average: 0.43, 0.43, 0.38
	Linux ha-149700 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [139823d9d8d4] <==
	I0603 13:52:44.402548       1 main.go:250] Node ha-149700-m04 has CIDR [10.244.3.0/24] 
	I0603 13:52:54.419619       1 main.go:223] Handling node with IPs: map[172.22.153.250:{}]
	I0603 13:52:54.420634       1 main.go:227] handling current node
	I0603 13:52:54.420672       1 main.go:223] Handling node with IPs: map[172.22.154.57:{}]
	I0603 13:52:54.420683       1 main.go:250] Node ha-149700-m02 has CIDR [10.244.1.0/24] 
	I0603 13:52:54.420920       1 main.go:223] Handling node with IPs: map[172.22.150.43:{}]
	I0603 13:52:54.421037       1 main.go:250] Node ha-149700-m03 has CIDR [10.244.2.0/24] 
	I0603 13:52:54.421276       1 main.go:223] Handling node with IPs: map[172.22.158.137:{}]
	I0603 13:52:54.421291       1 main.go:250] Node ha-149700-m04 has CIDR [10.244.3.0/24] 
	I0603 13:53:04.434471       1 main.go:223] Handling node with IPs: map[172.22.153.250:{}]
	I0603 13:53:04.434551       1 main.go:227] handling current node
	I0603 13:53:04.434565       1 main.go:223] Handling node with IPs: map[172.22.154.57:{}]
	I0603 13:53:04.434572       1 main.go:250] Node ha-149700-m02 has CIDR [10.244.1.0/24] 
	I0603 13:53:04.434894       1 main.go:223] Handling node with IPs: map[172.22.150.43:{}]
	I0603 13:53:04.434985       1 main.go:250] Node ha-149700-m03 has CIDR [10.244.2.0/24] 
	I0603 13:53:04.435052       1 main.go:223] Handling node with IPs: map[172.22.158.137:{}]
	I0603 13:53:04.435161       1 main.go:250] Node ha-149700-m04 has CIDR [10.244.3.0/24] 
	I0603 13:53:14.451715       1 main.go:223] Handling node with IPs: map[172.22.153.250:{}]
	I0603 13:53:14.451960       1 main.go:227] handling current node
	I0603 13:53:14.452064       1 main.go:223] Handling node with IPs: map[172.22.154.57:{}]
	I0603 13:53:14.452292       1 main.go:250] Node ha-149700-m02 has CIDR [10.244.1.0/24] 
	I0603 13:53:14.452504       1 main.go:223] Handling node with IPs: map[172.22.150.43:{}]
	I0603 13:53:14.452771       1 main.go:250] Node ha-149700-m03 has CIDR [10.244.2.0/24] 
	I0603 13:53:14.452950       1 main.go:223] Handling node with IPs: map[172.22.158.137:{}]
	I0603 13:53:14.453061       1 main.go:250] Node ha-149700-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f9a72751b1c6] <==
	I0603 13:26:00.729578       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0603 13:26:00.766413       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 13:26:12.498378       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0603 13:26:12.841563       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0603 13:33:38.541662       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.9µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0603 13:33:38.549938       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0603 13:33:38.554100       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0603 13:33:38.558412       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0603 13:33:38.559120       1 timeout.go:142] post-timeout activity - time-elapsed: 114.25692ms, PATCH "/api/v1/namespaces/default/events/ha-149700-m03.17d581dca38320b3" result: <nil>
	E0603 13:34:46.296264       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61870: use of closed network connection
	E0603 13:34:46.824780       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61872: use of closed network connection
	E0603 13:34:48.631001       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61874: use of closed network connection
	E0603 13:34:49.604728       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61876: use of closed network connection
	E0603 13:34:50.139760       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61878: use of closed network connection
	E0603 13:34:50.680814       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61880: use of closed network connection
	E0603 13:34:51.225296       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61882: use of closed network connection
	E0603 13:34:51.750632       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61884: use of closed network connection
	E0603 13:34:52.268301       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61886: use of closed network connection
	E0603 13:34:53.177135       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61890: use of closed network connection
	E0603 13:35:03.720131       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61892: use of closed network connection
	E0603 13:35:04.229291       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61895: use of closed network connection
	E0603 13:35:14.735704       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61897: use of closed network connection
	E0603 13:35:15.253550       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61900: use of closed network connection
	E0603 13:35:25.799651       1 conn.go:339] Error on socket receive: read tcp 172.22.159.254:8443->172.22.144.1:61902: use of closed network connection
	W0603 13:52:28.669351       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.22.150.43 172.22.153.250]
	
	
	==> kube-controller-manager [b491f438ec2f] <==
	I0603 13:29:45.627650       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-149700-m02" podCIDRs=["10.244.1.0/24"]
	I0603 13:29:47.436614       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-149700-m02"
	I0603 13:33:37.611878       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-149700-m03\" does not exist"
	I0603 13:33:37.633390       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-149700-m03" podCIDRs=["10.244.2.0/24"]
	I0603 13:33:42.828162       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-149700-m03"
	I0603 13:34:39.604426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="159.116437ms"
	I0603 13:34:39.751644       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="146.665316ms"
	I0603 13:34:40.092809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="341.069592ms"
	I0603 13:34:40.334023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="241.053626ms"
	I0603 13:34:40.402709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.155648ms"
	I0603 13:34:40.402815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.2µs"
	I0603 13:34:40.803526       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.001µs"
	I0603 13:34:42.914656       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.353807ms"
	I0603 13:34:42.915280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.601µs"
	I0603 13:34:43.053326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.748399ms"
	I0603 13:34:43.054254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="124.601µs"
	I0603 13:34:43.451459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.034606ms"
	I0603 13:34:43.452038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.801µs"
	I0603 13:39:02.219664       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-149700-m04\" does not exist"
	I0603 13:39:02.268130       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-149700-m04" podCIDRs=["10.244.3.0/24"]
	I0603 13:39:02.935555       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-149700-m04"
	I0603 13:39:45.228132       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-149700-m04"
	I0603 13:52:43.307974       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-149700-m04"
	I0603 13:52:44.622597       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="234.906361ms"
	I0603 13:52:44.626996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.601µs"
	
	
	==> kube-proxy [4879852b10da] <==
	I0603 13:26:14.358495       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:26:14.373061       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.153.250"]
	I0603 13:26:14.425474       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:26:14.425650       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:26:14.425675       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:26:14.433307       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:26:14.433745       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:26:14.434072       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:26:14.435488       1 config.go:192] "Starting service config controller"
	I0603 13:26:14.436145       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:26:14.436725       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:26:14.436983       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:26:14.445276       1 config.go:319] "Starting node config controller"
	I0603 13:26:14.445289       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:26:14.537663       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:26:14.545512       1 shared_informer.go:320] Caches are synced for node config
	I0603 13:26:14.545597       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [962282ca8062] <==
	W0603 13:25:57.166877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 13:25:57.166914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 13:25:57.177724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 13:25:57.177917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 13:25:57.363313       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 13:25:57.363982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 13:25:57.368106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 13:25:57.368158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 13:25:57.452000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 13:25:57.452127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 13:25:57.560458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 13:25:57.560721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 13:25:57.568759       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 13:25:57.569059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 13:25:57.615976       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 13:25:57.616025       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 13:26:00.768329       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0603 13:33:37.757427       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-v4w4l\": pod kindnet-v4w4l is already assigned to node \"ha-149700-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-v4w4l" node="ha-149700-m03"
	E0603 13:33:37.759464       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3df37f74-f7b9-43c1-854b-38ab7224fc66(kube-system/kindnet-v4w4l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-v4w4l"
	E0603 13:33:37.759693       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-v4w4l\": pod kindnet-v4w4l is already assigned to node \"ha-149700-m03\"" pod="kube-system/kindnet-v4w4l"
	I0603 13:33:37.760020       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-v4w4l" node="ha-149700-m03"
	E0603 13:34:39.543023       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vzbnc\": pod busybox-fc5497c4f-vzbnc is already assigned to node \"ha-149700-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-vzbnc" node="ha-149700-m02"
	E0603 13:34:39.543170       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod aef956f6-f05c-45d8-b772-784ff2b201df(default/busybox-fc5497c4f-vzbnc) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-vzbnc"
	E0603 13:34:39.543327       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vzbnc\": pod busybox-fc5497c4f-vzbnc is already assigned to node \"ha-149700-m02\"" pod="default/busybox-fc5497c4f-vzbnc"
	I0603 13:34:39.543593       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-vzbnc" node="ha-149700-m02"
	
	
	==> kubelet <==
	Jun 03 13:49:00 ha-149700 kubelet[2205]: E0603 13:49:00.851917    2205 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:49:00 ha-149700 kubelet[2205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:49:00 ha-149700 kubelet[2205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:49:00 ha-149700 kubelet[2205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:49:00 ha-149700 kubelet[2205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:50:00 ha-149700 kubelet[2205]: E0603 13:50:00.850144    2205 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:50:00 ha-149700 kubelet[2205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:50:00 ha-149700 kubelet[2205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:50:00 ha-149700 kubelet[2205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:50:00 ha-149700 kubelet[2205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:51:00 ha-149700 kubelet[2205]: E0603 13:51:00.849659    2205 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:51:00 ha-149700 kubelet[2205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:51:00 ha-149700 kubelet[2205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:51:00 ha-149700 kubelet[2205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:51:00 ha-149700 kubelet[2205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:52:00 ha-149700 kubelet[2205]: E0603 13:52:00.851804    2205 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:52:00 ha-149700 kubelet[2205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:52:00 ha-149700 kubelet[2205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:52:00 ha-149700 kubelet[2205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:52:00 ha-149700 kubelet[2205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:53:00 ha-149700 kubelet[2205]: E0603 13:53:00.849776    2205 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:53:00 ha-149700 kubelet[2205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:53:00 ha-149700 kubelet[2205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:53:00 ha-149700 kubelet[2205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:53:00 ha-149700 kubelet[2205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:53:09.384202    2808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-149700 -n ha-149700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-149700 -n ha-149700: (12.570409s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-149700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (94.53s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (57.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-mjhcf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-mjhcf -- sh -c "ping -c 1 172.22.144.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-mjhcf -- sh -c "ping -c 1 172.22.144.1": exit status 1 (10.5107259s)

                                                
                                                
-- stdout --
	PING 172.22.144.1 (172.22.144.1): 56 data bytes
	
	--- 172.22.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 14:31:26.730448   13676 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.22.144.1) from pod (busybox-fc5497c4f-mjhcf): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-n2t5d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-n2t5d -- sh -c "ping -c 1 172.22.144.1"
E0603 14:31:38.011028   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-n2t5d -- sh -c "ping -c 1 172.22.144.1": exit status 1 (10.5091028s)

                                                
                                                
-- stdout --
	PING 172.22.144.1 (172.22.144.1): 56 data bytes
	
	--- 172.22.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 14:31:37.754883   10020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.22.144.1) from pod (busybox-fc5497c4f-n2t5d): exit status 1
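For reference, a minimal by-hand sketch of the check this test automates, assuming the multinode-720500 kubeconfig context and the busybox pod names from the run above (the target address is whatever nslookup returns for host.minikube.internal, 172.22.144.1 in this run):

	# resolve the host entry minikube writes into the guest for the Hyper-V host
	kubectl --context multinode-720500 exec busybox-fc5497c4f-mjhcf -- nslookup host.minikube.internal
	# send a single ICMP echo to the resolved address
	kubectl --context multinode-720500 exec busybox-fc5497c4f-mjhcf -- ping -c 1 172.22.144.1

In this run the name resolves but the echo gets no reply (100% packet loss), which is what trips the assertion at multinode_test.go:584 for both pods.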
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-720500 -n multinode-720500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-720500 -n multinode-720500: (12.2657208s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 logs -n 25: (8.6077716s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-773400 ssh -- ls                    | mount-start-2-773400 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:20 UTC | 03 Jun 24 14:20 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-773400                           | mount-start-1-773400 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:20 UTC | 03 Jun 24 14:20 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-773400 ssh -- ls                    | mount-start-2-773400 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:20 UTC | 03 Jun 24 14:20 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-773400                           | mount-start-2-773400 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:20 UTC | 03 Jun 24 14:21 UTC |
	| start   | -p mount-start-2-773400                           | mount-start-2-773400 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:21 UTC | 03 Jun 24 14:23 UTC |
	| mount   | C:\Users\jenkins.minikube3:/minikube-host         | mount-start-2-773400 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:23 UTC |                     |
	|         | --profile mount-start-2-773400 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-773400 ssh -- ls                    | mount-start-2-773400 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:23 UTC | 03 Jun 24 14:23 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-773400                           | mount-start-2-773400 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:23 UTC | 03 Jun 24 14:24 UTC |
	| delete  | -p mount-start-1-773400                           | mount-start-1-773400 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:24 UTC | 03 Jun 24 14:24 UTC |
	| start   | -p multinode-720500                               | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:24 UTC | 03 Jun 24 14:30 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- apply -f                   | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- rollout                    | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- get pods -o                | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- get pods -o                | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- exec                       | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | busybox-fc5497c4f-mjhcf --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- exec                       | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | busybox-fc5497c4f-n2t5d --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- exec                       | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | busybox-fc5497c4f-mjhcf --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- exec                       | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | busybox-fc5497c4f-n2t5d --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- exec                       | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | busybox-fc5497c4f-mjhcf -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- exec                       | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | busybox-fc5497c4f-n2t5d -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- get pods -o                | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- exec                       | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | busybox-fc5497c4f-mjhcf                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- exec                       | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC |                     |
	|         | busybox-fc5497c4f-mjhcf -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.22.144.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- exec                       | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC | 03 Jun 24 14:31 UTC |
	|         | busybox-fc5497c4f-n2t5d                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-720500 -- exec                       | multinode-720500     | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:31 UTC |                     |
	|         | busybox-fc5497c4f-n2t5d -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.22.144.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
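The final rows of the audit table record each busybox pod resolving host.minikube.internal and then pinging the address it gets back (172.22.144.1 on this run); note that the two ping rows have no end timestamp. Below is a minimal, hypothetical PowerShell restatement of that check, run from the Windows host: the profile name and the busybox deployment come from this run, the pod index and the resolved host IP will differ per environment.

    # Sketch only: assumes the multinode-720500 profile and an existing busybox deployment.
    $pod    = minikube kubectl -p multinode-720500 -- get pods -o "jsonpath={.items[0].metadata.name}"
    # Same extraction the test uses: line 5 of nslookup output, third field.
    $hostIp = minikube kubectl -p multinode-720500 -- exec $pod -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # The step that never completed in the table above: one ICMP ping from the pod to the host.
    minikube kubectl -p multinode-720500 -- exec $pod -- sh -c "ping -c 1 $hostIp"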
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 14:24:11
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 14:24:11.306773   11176 out.go:291] Setting OutFile to fd 1480 ...
	I0603 14:24:11.307154   11176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:24:11.307154   11176 out.go:304] Setting ErrFile to fd 1052...
	I0603 14:24:11.307680   11176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:24:11.328418   11176 out.go:298] Setting JSON to false
	I0603 14:24:11.331452   11176 start.go:129] hostinfo: {"hostname":"minikube3","uptime":25579,"bootTime":1717399071,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 14:24:11.331452   11176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 14:24:11.339026   11176 out.go:177] * [multinode-720500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 14:24:11.347379   11176 notify.go:220] Checking for updates...
	I0603 14:24:11.352742   11176 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:24:11.358770   11176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 14:24:11.364822   11176 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 14:24:11.370186   11176 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 14:24:11.377381   11176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 14:24:11.382159   11176 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:24:11.383162   11176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 14:24:16.901927   11176 out.go:177] * Using the hyperv driver based on user configuration
	I0603 14:24:16.905299   11176 start.go:297] selected driver: hyperv
	I0603 14:24:16.905299   11176 start.go:901] validating driver "hyperv" against <nil>
	I0603 14:24:16.905299   11176 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 14:24:16.956281   11176 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 14:24:16.957664   11176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 14:24:16.957743   11176 cni.go:84] Creating CNI manager for ""
	I0603 14:24:16.957743   11176 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0603 14:24:16.957743   11176 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0603 14:24:16.957934   11176 start.go:340] cluster config:
	{Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:24:16.958303   11176 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 14:24:16.961532   11176 out.go:177] * Starting "multinode-720500" primary control-plane node in "multinode-720500" cluster
	I0603 14:24:16.965487   11176 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 14:24:16.965526   11176 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 14:24:16.965526   11176 cache.go:56] Caching tarball of preloaded images
	I0603 14:24:16.965526   11176 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 14:24:16.965526   11176 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 14:24:16.965526   11176 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:24:16.966574   11176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json: {Name:mk6284aac42dc44b759179f7a959c487f170386b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:24:16.967534   11176 start.go:360] acquireMachinesLock for multinode-720500: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 14:24:16.967534   11176 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-720500"
	I0603 14:24:16.967534   11176 start.go:93] Provisioning new machine with config: &{Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 14:24:16.967534   11176 start.go:125] createHost starting for "" (driver="hyperv")
	I0603 14:24:16.970629   11176 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 14:24:16.971686   11176 start.go:159] libmachine.API.Create for "multinode-720500" (driver="hyperv")
	I0603 14:24:16.971686   11176 client.go:168] LocalClient.Create starting
	I0603 14:24:16.971686   11176 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0603 14:24:16.971686   11176 main.go:141] libmachine: Decoding PEM data...
	I0603 14:24:16.971686   11176 main.go:141] libmachine: Parsing certificate...
	I0603 14:24:16.972692   11176 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0603 14:24:16.972692   11176 main.go:141] libmachine: Decoding PEM data...
	I0603 14:24:16.972692   11176 main.go:141] libmachine: Parsing certificate...
	I0603 14:24:16.972692   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 14:24:19.072136   11176 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 14:24:19.072136   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:19.072136   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 14:24:20.821787   11176 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 14:24:20.821936   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:20.821936   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 14:24:22.318163   11176 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 14:24:22.318992   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:22.318992   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 14:24:25.954608   11176 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 14:24:25.955293   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:25.957735   11176 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 14:24:26.458613   11176 main.go:141] libmachine: Creating SSH key...
	I0603 14:24:26.681803   11176 main.go:141] libmachine: Creating VM...
	I0603 14:24:26.682294   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 14:24:29.564386   11176 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 14:24:29.565276   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:29.565276   11176 main.go:141] libmachine: Using switch "Default Switch"
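For readability, the switch-discovery query the driver runs above (once before downloading the ISO and again before creating the VM) is equivalent to the following PowerShell, restated with one pipeline stage per line; the GUID is the Default Switch id on this host and nothing here is added to the test itself:

    Hyper-V\Get-VMSwitch |
      Select-Object Id, Name, SwitchType |
      Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
      Sort-Object -Property SwitchType |
      ConvertTo-Json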
	I0603 14:24:29.565276   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 14:24:31.295384   11176 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 14:24:31.295464   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:31.295464   11176 main.go:141] libmachine: Creating VHD
	I0603 14:24:31.295729   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 14:24:35.086267   11176 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F9556C5D-1C3B-49C0-8D5F-26278C2F45A9
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 14:24:35.086655   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:35.086655   11176 main.go:141] libmachine: Writing magic tar header
	I0603 14:24:35.086655   11176 main.go:141] libmachine: Writing SSH key tar header
	I0603 14:24:35.095263   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 14:24:38.276627   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:24:38.276627   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:38.277737   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\disk.vhd' -SizeBytes 20000MB
	I0603 14:24:40.843234   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:24:40.843234   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:40.843897   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-720500 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 14:24:44.587260   11176 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-720500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 14:24:44.587651   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:44.587808   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-720500 -DynamicMemoryEnabled $false
	I0603 14:24:46.851678   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:24:46.851840   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:46.851907   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-720500 -Count 2
	I0603 14:24:49.035945   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:24:49.036782   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:49.037152   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-720500 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\boot2docker.iso'
	I0603 14:24:51.637928   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:24:51.637928   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:51.638576   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-720500 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\disk.vhd'
	I0603 14:24:54.286619   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:24:54.287242   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:54.287242   11176 main.go:141] libmachine: Starting VM...
	I0603 14:24:54.287242   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-720500
	I0603 14:24:57.370437   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:24:57.370437   11176 main.go:141] libmachine: [stderr =====>] : 
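Collapsed into one readable script, the Hyper-V cmdlet sequence the driver has just executed is shown below; this is a restatement of the commands already in the log, with the path, sizes and VM name taken from this run. Between New-VHD and Convert-VHD the driver also writes the generated SSH key into the fixed VHD as a small tar stream, which corresponds to the "Writing magic tar header" / "Writing SSH key tar header" lines above.

    $dir = 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500'
    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # (the SSH key is written into fixed.vhd as a tar stream at this point)
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB
    Hyper-V\New-VM multinode-720500 -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName multinode-720500 -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor multinode-720500 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName multinode-720500 -Path "$dir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName multinode-720500 -Path "$dir\disk.vhd"
    Hyper-V\Start-VM multinode-720500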
	I0603 14:24:57.370749   11176 main.go:141] libmachine: Waiting for host to start...
	I0603 14:24:57.370749   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:24:59.686803   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:24:59.686803   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:24:59.687591   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:25:02.383767   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:25:02.384767   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:03.396795   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:25:05.636603   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:25:05.637763   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:05.637888   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:25:08.185403   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:25:08.186275   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:09.187696   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:25:11.397366   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:25:11.397366   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:11.398225   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:25:13.890204   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:25:13.890204   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:14.896948   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:25:17.108884   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:25:17.108884   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:17.109156   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:25:19.679446   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:25:19.679446   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:20.684191   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:25:22.912686   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:25:22.912686   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:22.913326   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:25:25.512302   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:25:25.512451   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:25.512451   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:25:27.675658   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:25:27.675761   11176 main.go:141] libmachine: [stderr =====>] : 
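The alternating Get-VM state / ipaddresses[0] calls above are the driver's wait loop: it polls until the VM reports Running and the first network adapter exposes an IPv4 address (172.22.150.195 after roughly 30 seconds here). A condensed, hypothetical equivalent of that loop:

    # Sketch only: same polling logic, VM name taken from this run.
    $ip = $null
    while (-not $ip) {
        if ((Hyper-V\Get-VM multinode-720500).State -ne 'Running') { Start-Sleep -Seconds 1; continue }
        $ip = ((Hyper-V\Get-VM multinode-720500).NetworkAdapters[0]).IPAddresses[0]
        if (-not $ip) { Start-Sleep -Seconds 1 }
    }
    $ip   # 172.22.150.195 on this run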
	I0603 14:25:27.675761   11176 machine.go:94] provisionDockerMachine start ...
	I0603 14:25:27.675761   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:25:29.917874   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:25:29.917874   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:29.918725   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:25:32.466070   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:25:32.466070   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:32.472760   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:25:32.483988   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.195 22 <nil> <nil>}
	I0603 14:25:32.483988   11176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 14:25:32.623375   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 14:25:32.623375   11176 buildroot.go:166] provisioning hostname "multinode-720500"
	I0603 14:25:32.623930   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:25:34.734796   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:25:34.734796   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:34.734796   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:25:37.347758   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:25:37.348809   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:37.354874   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:25:37.355344   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.195 22 <nil> <nil>}
	I0603 14:25:37.355388   11176 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-720500 && echo "multinode-720500" | sudo tee /etc/hostname
	I0603 14:25:37.529499   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-720500
	
	I0603 14:25:37.529559   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:25:39.718017   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:25:39.718017   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:39.718827   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:25:42.312357   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:25:42.312357   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:42.318847   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:25:42.319721   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.195 22 <nil> <nil>}
	I0603 14:25:42.319788   11176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-720500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-720500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-720500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 14:25:42.471545   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 14:25:42.471545   11176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 14:25:42.471545   11176 buildroot.go:174] setting up certificates
	I0603 14:25:42.471545   11176 provision.go:84] configureAuth start
	I0603 14:25:42.472155   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:25:44.576226   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:25:44.576348   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:44.576426   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:25:47.106535   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:25:47.106535   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:47.107273   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:25:49.268109   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:25:49.268109   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:49.268414   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:25:51.811561   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:25:51.812434   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:51.812434   11176 provision.go:143] copyHostCerts
	I0603 14:25:51.812721   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 14:25:51.812721   11176 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 14:25:51.812721   11176 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 14:25:51.813420   11176 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 14:25:51.814784   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 14:25:51.815096   11176 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 14:25:51.815191   11176 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 14:25:51.815444   11176 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 14:25:51.816559   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 14:25:51.817025   11176 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 14:25:51.817025   11176 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 14:25:51.817025   11176 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 14:25:51.817945   11176 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-720500 san=[127.0.0.1 172.22.150.195 localhost minikube multinode-720500]
	I0603 14:25:51.941054   11176 provision.go:177] copyRemoteCerts
	I0603 14:25:51.954090   11176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 14:25:51.954090   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:25:54.085630   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:25:54.086687   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:54.086687   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:25:56.618586   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:25:56.619065   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:56.619145   11176 sshutil.go:53] new ssh client: &{IP:172.22.150.195 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:25:56.723429   11176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7692135s)
	I0603 14:25:56.723429   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 14:25:56.723909   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 14:25:56.769555   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 14:25:56.769632   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0603 14:25:56.817341   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 14:25:56.817341   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 14:25:56.861342   11176 provision.go:87] duration metric: took 14.3896794s to configureAuth
	I0603 14:25:56.861342   11176 buildroot.go:189] setting minikube options for container-runtime
	I0603 14:25:56.862400   11176 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:25:56.862400   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:25:58.979023   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:25:58.979089   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:25:58.979089   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:26:01.537318   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:26:01.537532   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:01.543357   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:26:01.544141   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.195 22 <nil> <nil>}
	I0603 14:26:01.544141   11176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 14:26:01.682837   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 14:26:01.682837   11176 buildroot.go:70] root file system type: tmpfs
	I0603 14:26:01.683448   11176 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 14:26:01.683750   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:26:03.801146   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:26:03.801146   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:03.801731   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:26:06.389443   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:26:06.389443   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:06.396372   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:26:06.397156   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.195 22 <nil> <nil>}
	I0603 14:26:06.397156   11176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 14:26:06.559458   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 14:26:06.559458   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:26:08.696660   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:26:08.696806   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:08.696806   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:26:11.274168   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:26:11.275104   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:11.281768   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:26:11.282299   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.195 22 <nil> <nil>}
	I0603 14:26:11.282615   11176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 14:26:13.411966   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
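The SSH step above writes the generated unit to /lib/systemd/system/docker.service.new and only swaps it in (with a daemon-reload, enable and restart) when it differs from the installed unit; on this freshly created VM the diff fails because no docker.service exists yet, so the file is moved and the multi-user.target symlink is created. A hypothetical host-side check of the result, not part of the test:

    minikube -p multinode-720500 ssh "sudo systemctl cat docker && sudo systemctl is-active docker"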
	
	I0603 14:26:13.411966   11176 machine.go:97] duration metric: took 45.7358303s to provisionDockerMachine
	I0603 14:26:13.411966   11176 client.go:171] duration metric: took 1m56.4393251s to LocalClient.Create
	I0603 14:26:13.411966   11176 start.go:167] duration metric: took 1m56.4393251s to libmachine.API.Create "multinode-720500"
	I0603 14:26:13.411966   11176 start.go:293] postStartSetup for "multinode-720500" (driver="hyperv")
	I0603 14:26:13.411966   11176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 14:26:13.425099   11176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 14:26:13.425099   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:26:15.570051   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:26:15.570051   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:15.570275   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:26:18.129585   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:26:18.129585   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:18.129585   11176 sshutil.go:53] new ssh client: &{IP:172.22.150.195 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:26:18.256622   11176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8314452s)
	I0603 14:26:18.271506   11176 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 14:26:18.280668   11176 command_runner.go:130] > NAME=Buildroot
	I0603 14:26:18.280668   11176 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 14:26:18.280668   11176 command_runner.go:130] > ID=buildroot
	I0603 14:26:18.280668   11176 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 14:26:18.280668   11176 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 14:26:18.280668   11176 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 14:26:18.280668   11176 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 14:26:18.281264   11176 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 14:26:18.284187   11176 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 14:26:18.284304   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 14:26:18.296039   11176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 14:26:18.315187   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 14:26:18.359699   11176 start.go:296] duration metric: took 4.9476919s for postStartSetup
	I0603 14:26:18.363223   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:26:20.484551   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:26:20.484551   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:20.485098   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:26:22.986644   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:26:22.987752   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:22.987974   11176 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:26:22.990586   11176 start.go:128] duration metric: took 2m6.0220181s to createHost
	I0603 14:26:22.991211   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:26:25.123818   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:26:25.124155   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:25.124248   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:26:27.650531   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:26:27.651214   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:27.659199   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:26:27.659749   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.195 22 <nil> <nil>}
	I0603 14:26:27.659749   11176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 14:26:27.793327   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717424787.791965219
	
	I0603 14:26:27.793435   11176 fix.go:216] guest clock: 1717424787.791965219
	I0603 14:26:27.793435   11176 fix.go:229] Guest: 2024-06-03 14:26:27.791965219 +0000 UTC Remote: 2024-06-03 14:26:22.9911413 +0000 UTC m=+131.850648001 (delta=4.800823919s)
	I0603 14:26:27.793555   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:26:29.923404   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:26:29.923404   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:29.924271   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:26:32.510674   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:26:32.510674   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:32.516773   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:26:32.517289   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.150.195 22 <nil> <nil>}
	I0603 14:26:32.517372   11176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717424787
	I0603 14:26:32.663533   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 14:26:27 UTC 2024
	
	I0603 14:26:32.663533   11176 fix.go:236] clock set: Mon Jun  3 14:26:27 UTC 2024
	 (err=<nil>)
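The fix above reads the guest clock over SSH (date +%s.%N), compares it with the host clock recorded when createHost finished, and resets the guest with sudo date -s because the delta (about 4.8 s here) is large. A hypothetical host-side way to measure the same skew, assuming the multinode-720500 profile:

    $guest   = [double]((minikube -p multinode-720500 ssh "date +%s.%N") -join '')
    $hostNow = [DateTimeOffset]::UtcNow.ToUnixTimeMilliseconds() / 1000.0
    "guest - host clock delta: $($guest - $hostNow) s"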
	I0603 14:26:32.663533   11176 start.go:83] releasing machines lock for "multinode-720500", held for 2m15.6948854s
	I0603 14:26:32.663839   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:26:34.789217   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:26:34.789651   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:34.789848   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:26:37.370567   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:26:37.370627   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:37.377386   11176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 14:26:37.377416   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:26:37.385948   11176 ssh_runner.go:195] Run: cat /version.json
	I0603 14:26:37.385948   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:26:39.650657   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:26:39.650657   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:39.650657   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:26:39.662393   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:26:39.662393   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:39.662393   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:26:42.326706   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:26:42.326906   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:42.327121   11176 sshutil.go:53] new ssh client: &{IP:172.22.150.195 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:26:42.359784   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:26:42.359842   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:26:42.359842   11176 sshutil.go:53] new ssh client: &{IP:172.22.150.195 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:26:42.490215   11176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 14:26:42.490365   11176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1129068s)
	I0603 14:26:42.490365   11176 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0603 14:26:42.490490   11176 ssh_runner.go:235] Completed: cat /version.json: (5.1045007s)
	I0603 14:26:42.502619   11176 ssh_runner.go:195] Run: systemctl --version
	I0603 14:26:42.511560   11176 command_runner.go:130] > systemd 252 (252)
	I0603 14:26:42.511560   11176 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0603 14:26:42.524599   11176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 14:26:42.531998   11176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0603 14:26:42.532702   11176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 14:26:42.544322   11176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 14:26:42.572551   11176 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0603 14:26:42.572551   11176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 14:26:42.572551   11176 start.go:494] detecting cgroup driver to use...
	I0603 14:26:42.572551   11176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:26:42.608615   11176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 14:26:42.621850   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 14:26:42.651569   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 14:26:42.669612   11176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 14:26:42.680616   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 14:26:42.709817   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 14:26:42.738670   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 14:26:42.770239   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 14:26:42.800691   11176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 14:26:42.829944   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 14:26:42.861195   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 14:26:42.893194   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 14:26:42.927813   11176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 14:26:42.949928   11176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 14:26:42.961399   11176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 14:26:42.992412   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:26:43.201958   11176 ssh_runner.go:195] Run: sudo systemctl restart containerd
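The sed calls above rewrite /etc/containerd/config.toml so containerd matches the detected "cgroupfs" driver (SystemdCgroup = false) before the service is restarted. A condensed sketch of the key edits, run through a local shell instead of minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The decisive edit for the "cgroupfs" driver is SystemdCgroup = false;
        // the other calls normalise the runc runtime name and the CNI conf dir.
        edits := []string{
            `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
            `sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
        }
        for _, e := range edits {
            if out, err := exec.Command("/bin/sh", "-c", e).CombinedOutput(); err != nil {
                fmt.Printf("edit failed: %v\n%s\n", err, out)
            }
        }
        _ = exec.Command("/bin/sh", "-c", "sudo systemctl restart containerd").Run()
    }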
	I0603 14:26:43.233885   11176 start.go:494] detecting cgroup driver to use...
	I0603 14:26:43.248668   11176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 14:26:43.274293   11176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 14:26:43.274293   11176 command_runner.go:130] > [Unit]
	I0603 14:26:43.274293   11176 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 14:26:43.274293   11176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 14:26:43.274293   11176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 14:26:43.274293   11176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 14:26:43.274293   11176 command_runner.go:130] > StartLimitBurst=3
	I0603 14:26:43.274293   11176 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 14:26:43.274293   11176 command_runner.go:130] > [Service]
	I0603 14:26:43.274293   11176 command_runner.go:130] > Type=notify
	I0603 14:26:43.274726   11176 command_runner.go:130] > Restart=on-failure
	I0603 14:26:43.274726   11176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 14:26:43.274775   11176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 14:26:43.274775   11176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 14:26:43.274808   11176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 14:26:43.274808   11176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 14:26:43.274808   11176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 14:26:43.274808   11176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 14:26:43.274808   11176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 14:26:43.274808   11176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 14:26:43.274808   11176 command_runner.go:130] > ExecStart=
	I0603 14:26:43.274808   11176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 14:26:43.274808   11176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 14:26:43.274808   11176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 14:26:43.274808   11176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 14:26:43.274808   11176 command_runner.go:130] > LimitNOFILE=infinity
	I0603 14:26:43.274808   11176 command_runner.go:130] > LimitNPROC=infinity
	I0603 14:26:43.274808   11176 command_runner.go:130] > LimitCORE=infinity
	I0603 14:26:43.274808   11176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 14:26:43.274808   11176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 14:26:43.274808   11176 command_runner.go:130] > TasksMax=infinity
	I0603 14:26:43.274808   11176 command_runner.go:130] > TimeoutStartSec=0
	I0603 14:26:43.274808   11176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 14:26:43.274808   11176 command_runner.go:130] > Delegate=yes
	I0603 14:26:43.274808   11176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 14:26:43.274808   11176 command_runner.go:130] > KillMode=process
	I0603 14:26:43.274808   11176 command_runner.go:130] > [Install]
	I0603 14:26:43.274808   11176 command_runner.go:130] > WantedBy=multi-user.target
	I0603 14:26:43.286861   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:26:43.319799   11176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 14:26:43.364791   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:26:43.399891   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 14:26:43.433786   11176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 14:26:43.504508   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 14:26:43.530940   11176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:26:43.567941   11176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 14:26:43.584668   11176 ssh_runner.go:195] Run: which cri-dockerd
	I0603 14:26:43.590768   11176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 14:26:43.607041   11176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 14:26:43.625231   11176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 14:26:43.671929   11176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 14:26:43.874387   11176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 14:26:44.064874   11176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 14:26:44.064874   11176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 14:26:44.114185   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:26:44.343033   11176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 14:26:46.873990   11176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5309368s)
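docker.go:574 writes a small /etc/docker/daemon.json (130 bytes here) to pin Docker to the cgroupfs driver, then restarts the daemon. The exact payload is not shown in the log, so the following Go sketch only illustrates a plausible file built from Docker's documented options:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed content; only the cgroupdriver setting is implied by the log.
        daemon := map[string]interface{}{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "log-opts":       map[string]string{"max-size": "100m"},
            "storage-driver": "overlay2",
        }
        b, _ := json.MarshalIndent(daemon, "", "  ")
        fmt.Println(string(b)) // write to /etc/docker/daemon.json, then restart docker
    }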
	I0603 14:26:46.886117   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 14:26:46.920060   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 14:26:46.954230   11176 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 14:26:47.170380   11176 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 14:26:47.388304   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:26:47.592773   11176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 14:26:47.641195   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 14:26:47.674423   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:26:47.879777   11176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 14:26:47.987405   11176 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 14:26:47.998168   11176 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 14:26:48.005725   11176 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 14:26:48.005725   11176 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 14:26:48.005725   11176 command_runner.go:130] > Device: 0,22	Inode: 890         Links: 1
	I0603 14:26:48.005725   11176 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 14:26:48.005725   11176 command_runner.go:130] > Access: 2024-06-03 14:26:47.907902614 +0000
	I0603 14:26:48.006749   11176 command_runner.go:130] > Modify: 2024-06-03 14:26:47.907902614 +0000
	I0603 14:26:48.006749   11176 command_runner.go:130] > Change: 2024-06-03 14:26:47.911902620 +0000
	I0603 14:26:48.006791   11176 command_runner.go:130] >  Birth: -
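start.go:541 polls for the cri-dockerd socket rather than assuming the restart finished instantly. A small sketch of that wait loop, with the 500 ms poll interval as an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Succeed once the path exists and is a unix socket.
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }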
	I0603 14:26:48.007003   11176 start.go:562] Will wait 60s for crictl version
	I0603 14:26:48.019720   11176 ssh_runner.go:195] Run: which crictl
	I0603 14:26:48.025402   11176 command_runner.go:130] > /usr/bin/crictl
	I0603 14:26:48.037309   11176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 14:26:48.094856   11176 command_runner.go:130] > Version:  0.1.0
	I0603 14:26:48.094856   11176 command_runner.go:130] > RuntimeName:  docker
	I0603 14:26:48.094856   11176 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 14:26:48.094856   11176 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 14:26:48.094856   11176 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 14:26:48.103857   11176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 14:26:48.137663   11176 command_runner.go:130] > 26.0.2
	I0603 14:26:48.147813   11176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 14:26:48.179126   11176 command_runner.go:130] > 26.0.2
	I0603 14:26:48.183754   11176 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 14:26:48.183754   11176 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 14:26:48.188953   11176 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 14:26:48.188953   11176 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 14:26:48.188953   11176 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 14:26:48.188953   11176 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 14:26:48.192364   11176 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 14:26:48.192440   11176 ip.go:210] interface addr: 172.22.144.1/20
	I0603 14:26:48.203715   11176 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 14:26:48.213420   11176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
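host.minikube.internal is pinned to the host-side switch address (172.22.144.1 above) by rewriting /etc/hosts: drop any stale entry, append the fresh one, and copy the result back into place with sudo. The same one-liner driven from Go through a local bash, standing in for the SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the bash one-liner in the log; IP and hostname are taken from it.
        cmd := `{ grep -v $'\thost.minikube.internal$' /etc/hosts; printf '%s\t%s\n' 172.22.144.1 host.minikube.internal; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts`
        if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
            fmt.Printf("hosts update failed: %v\n%s\n", err, out)
        }
    }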
	I0603 14:26:48.240854   11176 kubeadm.go:877] updating cluster {Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.150.195 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 14:26:48.241022   11176 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 14:26:48.250431   11176 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 14:26:48.270110   11176 docker.go:685] Got preloaded images: 
	I0603 14:26:48.270110   11176 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0603 14:26:48.282105   11176 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 14:26:48.300105   11176 command_runner.go:139] > {"Repositories":{}}
	I0603 14:26:48.312163   11176 ssh_runner.go:195] Run: which lz4
	I0603 14:26:48.320368   11176 command_runner.go:130] > /usr/bin/lz4
	I0603 14:26:48.320895   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0603 14:26:48.332935   11176 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 14:26:48.339521   11176 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 14:26:48.339964   11176 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 14:26:48.340123   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0603 14:26:50.263676   11176 docker.go:649] duration metric: took 1.9424821s to copy over tarball
	I0603 14:26:50.275126   11176 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 14:26:58.771894   11176 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4966989s)
	I0603 14:26:58.771894   11176 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 14:26:58.841711   11176 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0603 14:26:58.861440   11176 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0603 14:26:58.861440   11176 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0603 14:26:58.907445   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:26:59.111434   11176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 14:27:02.099047   11176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9865235s)
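Because the Docker image store was empty ({"Repositories":{}}), the preload tarball is copied to the guest and unpacked under /var before Docker is restarted. A simplified sketch of that flow, with local exec standing in for the scp/SSH steps:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // preloaded reports whether the expected control-plane image is already present.
    func preloaded() bool {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        return err == nil && strings.Contains(string(out), "registry.k8s.io/kube-apiserver:v1.30.1")
    }

    func main() {
        if preloaded() {
            fmt.Println("images are preloaded, skipping loading")
            return
        }
        // (the preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
        //  tarball is copied to /preloaded.tar.lz4 first, as the log shows)
        cmds := []string{
            "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4",
            "sudo rm -f /preloaded.tar.lz4",
            "sudo systemctl daemon-reload",
            "sudo systemctl restart docker",
        }
        for _, c := range cmds {
            if out, err := exec.Command("/bin/sh", "-c", c).CombinedOutput(); err != nil {
                fmt.Printf("%s failed: %v\n%s\n", c, err, out)
                return
            }
        }
    }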
	I0603 14:27:02.110362   11176 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 14:27:02.135718   11176 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 14:27:02.135718   11176 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 14:27:02.135718   11176 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 14:27:02.135718   11176 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 14:27:02.135718   11176 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 14:27:02.135718   11176 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 14:27:02.135718   11176 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 14:27:02.135718   11176 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 14:27:02.137043   11176 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0603 14:27:02.137137   11176 cache_images.go:84] Images are preloaded, skipping loading
	I0603 14:27:02.137195   11176 kubeadm.go:928] updating node { 172.22.150.195 8443 v1.30.1 docker true true} ...
	I0603 14:27:02.137398   11176 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-720500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.150.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
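kubeadm.go:940 renders the kubelet drop-in shown above from the node's IP, name and Kubernetes version. A sketch of the same rendering with text/template; the struct field names are assumptions, and the output mirrors the log:

    package main

    import (
        "os"
        "text/template"
    )

    const kubeletUnit = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletUnit))
        // Values taken from the log above.
        _ = t.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.30.1", "multinode-720500", "172.22.150.195"})
    }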
	I0603 14:27:02.147339   11176 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 14:27:02.180287   11176 command_runner.go:130] > cgroupfs
	I0603 14:27:02.181419   11176 cni.go:84] Creating CNI manager for ""
	I0603 14:27:02.181458   11176 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 14:27:02.181458   11176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 14:27:02.181542   11176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.22.150.195 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-720500 NodeName:multinode-720500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.22.150.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.22.150.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 14:27:02.181846   11176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.22.150.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-720500"
	  kubeletExtraArgs:
	    node-ip: 172.22.150.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.22.150.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 14:27:02.194674   11176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 14:27:02.212976   11176 command_runner.go:130] > kubeadm
	I0603 14:27:02.212976   11176 command_runner.go:130] > kubectl
	I0603 14:27:02.212976   11176 command_runner.go:130] > kubelet
	I0603 14:27:02.212976   11176 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 14:27:02.226640   11176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 14:27:02.248313   11176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 14:27:02.283367   11176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 14:27:02.316655   11176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0603 14:27:02.364729   11176 ssh_runner.go:195] Run: grep 172.22.150.195	control-plane.minikube.internal$ /etc/hosts
	I0603 14:27:02.370278   11176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.150.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 14:27:02.405483   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:27:02.612191   11176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 14:27:02.642265   11176 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500 for IP: 172.22.150.195
	I0603 14:27:02.643300   11176 certs.go:194] generating shared ca certs ...
	I0603 14:27:02.643352   11176 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:27:02.644181   11176 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 14:27:02.644410   11176 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 14:27:02.644410   11176 certs.go:256] generating profile certs ...
	I0603 14:27:02.645037   11176 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\client.key
	I0603 14:27:02.645736   11176 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\client.crt with IP's: []
	I0603 14:27:02.817117   11176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\client.crt ...
	I0603 14:27:02.817117   11176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\client.crt: {Name:mk5d1451846a5e40aaf52baf6991684c6479d30d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:27:02.819033   11176 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\client.key ...
	I0603 14:27:02.819033   11176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\client.key: {Name:mk2b830830c46c2caa88ace91bb7bac522b37d7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:27:02.820313   11176 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key.c59936c8
	I0603 14:27:02.820622   11176 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt.c59936c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.22.150.195]
	I0603 14:27:02.977678   11176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt.c59936c8 ...
	I0603 14:27:02.977678   11176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt.c59936c8: {Name:mkbe21f4736d1be52cf3734925cf832c96b2fa67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:27:02.979660   11176 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key.c59936c8 ...
	I0603 14:27:02.979660   11176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key.c59936c8: {Name:mk73d69004ff08837968bf2408e80323c04c058d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:27:02.980420   11176 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt.c59936c8 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt
	I0603 14:27:02.991935   11176 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key.c59936c8 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key
	I0603 14:27:02.992538   11176 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.key
	I0603 14:27:02.992538   11176 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.crt with IP's: []
	I0603 14:27:03.177598   11176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.crt ...
	I0603 14:27:03.177598   11176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.crt: {Name:mkf2d5dbb74c4566409fc8e17974a317f5f800cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:27:03.179632   11176 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.key ...
	I0603 14:27:03.179632   11176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.key: {Name:mka1b5c1282d7351248ec0497da0be5d3b35b3c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:27:03.180283   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 14:27:03.181359   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 14:27:03.181504   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 14:27:03.181756   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 14:27:03.181929   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 14:27:03.182088   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 14:27:03.182229   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 14:27:03.190430   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
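Each "generating signed profile cert" step above boils down to creating a key pair plus an x509 certificate that carries the listed IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 172.22.150.195 for the apiserver cert) and is signed by the shared minikubeCA. A compact Go sketch of that with crypto/x509; key size, subject and most fields are assumptions, a throwaway CA stands in for the cached one, and the 26280h validity matches the CertExpiration in the cluster config:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // signedProfileCert returns a DER certificate with the given IP SANs, signed by ca.
    func signedProfileCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
            IPAddresses:  ips,
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        // Throwaway self-signed CA so the sketch is self-contained; the real run
        // reuses the cached minikubeCA key pair instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)

        ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("172.22.150.195")}
        der, _, err := signedProfileCert(ca, caKey, ips)
        fmt.Println("apiserver cert DER bytes:", len(der), "err:", err)
    }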
	I0603 14:27:03.191109   11176 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 14:27:03.191954   11176 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 14:27:03.192136   11176 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 14:27:03.192241   11176 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 14:27:03.192241   11176 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 14:27:03.192241   11176 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 14:27:03.193021   11176 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 14:27:03.193021   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:27:03.193461   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
	I0603 14:27:03.193461   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 14:27:03.194226   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 14:27:03.240901   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 14:27:03.287356   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 14:27:03.331012   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 14:27:03.377555   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 14:27:03.421505   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 14:27:03.463795   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 14:27:03.500970   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 14:27:03.545865   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 14:27:03.594851   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 14:27:03.641663   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 14:27:03.684772   11176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 14:27:03.729163   11176 ssh_runner.go:195] Run: openssl version
	I0603 14:27:03.737185   11176 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 14:27:03.748749   11176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 14:27:03.780543   11176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:27:03.787331   11176 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:27:03.787331   11176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:27:03.799142   11176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:27:03.807956   11176 command_runner.go:130] > b5213941
	I0603 14:27:03.824062   11176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 14:27:03.855862   11176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 14:27:03.884415   11176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 14:27:03.891776   11176 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 14:27:03.891776   11176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 14:27:03.904137   11176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 14:27:03.912427   11176 command_runner.go:130] > 51391683
	I0603 14:27:03.924689   11176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
	I0603 14:27:03.953108   11176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 14:27:03.985547   11176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 14:27:03.992943   11176 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 14:27:03.992943   11176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 14:27:04.004837   11176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 14:27:04.012674   11176 command_runner.go:130] > 3ec20f2e
	I0603 14:27:04.025607   11176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
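Each CA is made trusted on the guest by copying it into /usr/share/ca-certificates, hashing it with openssl x509 -hash, and symlinking /etc/ssl/certs/<hash>.0 to it, as the commands above show. A sketch that derives the hash and prints the symlink command; it assumes the PEM was already linked to /etc/ssl/certs/minikubeCA.pem by the earlier step:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/etc/ssl/certs/minikubeCA.pem" // linked from /usr/share/ca-certificates above
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out)) // b5213941 in the log above
        // The trust link that system TLS clients actually look up:
        fmt.Printf("sudo ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
    }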
	I0603 14:27:04.056321   11176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 14:27:04.062855   11176 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 14:27:04.063315   11176 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 14:27:04.063975   11176 kubeadm.go:391] StartCluster: {Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.150.195 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:27:04.073419   11176 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 14:27:04.109007   11176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 14:27:04.129991   11176 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0603 14:27:04.129991   11176 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0603 14:27:04.129991   11176 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0603 14:27:04.142757   11176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 14:27:04.174332   11176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 14:27:04.188905   11176 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0603 14:27:04.188937   11176 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0603 14:27:04.188937   11176 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0603 14:27:04.188937   11176 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 14:27:04.188937   11176 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 14:27:04.188937   11176 kubeadm.go:156] found existing configuration files:
	
	I0603 14:27:04.201497   11176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 14:27:04.217564   11176 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 14:27:04.217739   11176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 14:27:04.229674   11176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 14:27:04.261674   11176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 14:27:04.278418   11176 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 14:27:04.278418   11176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 14:27:04.290501   11176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 14:27:04.321984   11176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 14:27:04.337899   11176 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 14:27:04.338952   11176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 14:27:04.350603   11176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 14:27:04.383565   11176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 14:27:04.399803   11176 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 14:27:04.399803   11176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 14:27:04.412866   11176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
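The loop above is the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not already reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. A compact sketch of the same check, using local exec in place of the SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + f
            // grep exits non-zero when the endpoint is absent (or the file is missing).
            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
                _ = exec.Command("sudo", "rm", "-f", path).Run()
            }
        }
    }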
	I0603 14:27:04.430598   11176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 14:27:04.811187   11176 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 14:27:04.811187   11176 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 14:27:18.943311   11176 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 14:27:18.943311   11176 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0603 14:27:18.943619   11176 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 14:27:18.943619   11176 command_runner.go:130] > [preflight] Running pre-flight checks
	I0603 14:27:18.943836   11176 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 14:27:18.943836   11176 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 14:27:18.943836   11176 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 14:27:18.943836   11176 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 14:27:18.943836   11176 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 14:27:18.944374   11176 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 14:27:18.944549   11176 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 14:27:18.944616   11176 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 14:27:18.948522   11176 out.go:204]   - Generating certificates and keys ...
	I0603 14:27:18.948839   11176 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0603 14:27:18.948922   11176 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 14:27:18.949041   11176 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 14:27:18.949041   11176 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0603 14:27:18.949041   11176 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 14:27:18.949041   11176 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 14:27:18.949041   11176 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0603 14:27:18.949041   11176 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 14:27:18.949579   11176 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 14:27:18.949677   11176 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0603 14:27:18.949737   11176 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0603 14:27:18.949737   11176 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 14:27:18.949737   11176 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 14:27:18.949737   11176 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0603 14:27:18.949737   11176 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-720500] and IPs [172.22.150.195 127.0.0.1 ::1]
	I0603 14:27:18.950285   11176 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-720500] and IPs [172.22.150.195 127.0.0.1 ::1]
	I0603 14:27:18.950460   11176 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0603 14:27:18.950460   11176 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 14:27:18.950530   11176 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-720500] and IPs [172.22.150.195 127.0.0.1 ::1]
	I0603 14:27:18.950530   11176 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-720500] and IPs [172.22.150.195 127.0.0.1 ::1]
	I0603 14:27:18.950530   11176 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 14:27:18.950530   11176 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 14:27:18.950530   11176 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 14:27:18.950530   11176 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 14:27:18.950530   11176 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 14:27:18.950530   11176 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0603 14:27:18.951332   11176 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 14:27:18.951332   11176 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 14:27:18.951332   11176 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 14:27:18.951332   11176 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 14:27:18.951332   11176 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 14:27:18.951332   11176 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 14:27:18.951332   11176 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 14:27:18.951332   11176 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 14:27:18.951332   11176 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 14:27:18.951332   11176 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 14:27:18.951332   11176 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 14:27:18.951332   11176 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 14:27:18.952328   11176 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 14:27:18.952328   11176 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 14:27:18.952328   11176 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 14:27:18.952328   11176 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 14:27:18.956475   11176 out.go:204]   - Booting up control plane ...
	I0603 14:27:18.956475   11176 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 14:27:18.956475   11176 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 14:27:18.956475   11176 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 14:27:18.956475   11176 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 14:27:18.957325   11176 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 14:27:18.957325   11176 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 14:27:18.957325   11176 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 14:27:18.957325   11176 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 14:27:18.957325   11176 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 14:27:18.957325   11176 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 14:27:18.957325   11176 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 14:27:18.957325   11176 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0603 14:27:18.957325   11176 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 14:27:18.958358   11176 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 14:27:18.958358   11176 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 14:27:18.958358   11176 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 14:27:18.958358   11176 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002271526s
	I0603 14:27:18.958358   11176 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002271526s
	I0603 14:27:18.958358   11176 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 14:27:18.958358   11176 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 14:27:18.958358   11176 kubeadm.go:309] [api-check] The API server is healthy after 7.003085313s
	I0603 14:27:18.958358   11176 command_runner.go:130] > [api-check] The API server is healthy after 7.003085313s
	I0603 14:27:18.958358   11176 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 14:27:18.958358   11176 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 14:27:18.959329   11176 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 14:27:18.959329   11176 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 14:27:18.959329   11176 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0603 14:27:18.959329   11176 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 14:27:18.959329   11176 kubeadm.go:309] [mark-control-plane] Marking the node multinode-720500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 14:27:18.959329   11176 command_runner.go:130] > [mark-control-plane] Marking the node multinode-720500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 14:27:18.959329   11176 command_runner.go:130] > [bootstrap-token] Using token: h33w4t.3dwfq2tcosnype1n
	I0603 14:27:18.959329   11176 kubeadm.go:309] [bootstrap-token] Using token: h33w4t.3dwfq2tcosnype1n
	I0603 14:27:18.964334   11176 out.go:204]   - Configuring RBAC rules ...
	I0603 14:27:18.964334   11176 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 14:27:18.964334   11176 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 14:27:18.964334   11176 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 14:27:18.964334   11176 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 14:27:18.965322   11176 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 14:27:18.965322   11176 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 14:27:18.965322   11176 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 14:27:18.965322   11176 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 14:27:18.965322   11176 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 14:27:18.965322   11176 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 14:27:18.966330   11176 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 14:27:18.966330   11176 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 14:27:18.966330   11176 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 14:27:18.966330   11176 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 14:27:18.966330   11176 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0603 14:27:18.966330   11176 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 14:27:18.966330   11176 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0603 14:27:18.966330   11176 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 14:27:18.966330   11176 kubeadm.go:309] 
	I0603 14:27:18.966330   11176 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 14:27:18.966330   11176 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0603 14:27:18.966330   11176 kubeadm.go:309] 
	I0603 14:27:18.966330   11176 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0603 14:27:18.966330   11176 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 14:27:18.967393   11176 kubeadm.go:309] 
	I0603 14:27:18.967393   11176 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0603 14:27:18.967393   11176 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 14:27:18.967393   11176 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 14:27:18.967393   11176 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 14:27:18.967393   11176 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 14:27:18.967393   11176 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 14:27:18.967393   11176 kubeadm.go:309] 
	I0603 14:27:18.967393   11176 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 14:27:18.967393   11176 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0603 14:27:18.967393   11176 kubeadm.go:309] 
	I0603 14:27:18.967393   11176 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 14:27:18.967393   11176 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 14:27:18.967393   11176 kubeadm.go:309] 
	I0603 14:27:18.967393   11176 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 14:27:18.967393   11176 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0603 14:27:18.968326   11176 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 14:27:18.968326   11176 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 14:27:18.968326   11176 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 14:27:18.968326   11176 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 14:27:18.968326   11176 kubeadm.go:309] 
	I0603 14:27:18.968326   11176 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 14:27:18.968326   11176 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0603 14:27:18.968326   11176 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 14:27:18.968326   11176 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0603 14:27:18.968326   11176 kubeadm.go:309] 
	I0603 14:27:18.969327   11176 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token h33w4t.3dwfq2tcosnype1n \
	I0603 14:27:18.969327   11176 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token h33w4t.3dwfq2tcosnype1n \
	I0603 14:27:18.969327   11176 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f \
	I0603 14:27:18.969327   11176 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f \
	I0603 14:27:18.969327   11176 kubeadm.go:309] 	--control-plane 
	I0603 14:27:18.969327   11176 command_runner.go:130] > 	--control-plane 
	I0603 14:27:18.969327   11176 kubeadm.go:309] 
	I0603 14:27:18.969327   11176 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0603 14:27:18.969327   11176 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 14:27:18.969327   11176 kubeadm.go:309] 
	I0603 14:27:18.969327   11176 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token h33w4t.3dwfq2tcosnype1n \
	I0603 14:27:18.969327   11176 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token h33w4t.3dwfq2tcosnype1n \
	I0603 14:27:18.970316   11176 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f 
	I0603 14:27:18.970316   11176 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f 
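The --discovery-token-ca-cert-hash value printed above can be re-derived on the control plane if the join command is lost. A minimal sketch, assuming a shell inside the control-plane VM (e.g. via minikube ssh) and the certificate directory /var/lib/minikube/certs reported in the [certs] step; this is the standard kubeadm-documented derivation, not a command taken from this run:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'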
	I0603 14:27:18.970316   11176 cni.go:84] Creating CNI manager for ""
	I0603 14:27:18.970316   11176 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 14:27:18.973319   11176 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 14:27:18.989329   11176 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 14:27:18.998671   11176 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0603 14:27:18.998704   11176 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0603 14:27:18.998704   11176 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0603 14:27:18.998762   11176 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 14:27:18.998762   11176 command_runner.go:130] > Access: 2024-06-03 14:25:23.285197100 +0000
	I0603 14:27:18.998762   11176 command_runner.go:130] > Modify: 2024-05-22 23:10:00.000000000 +0000
	I0603 14:27:18.998762   11176 command_runner.go:130] > Change: 2024-06-03 14:25:14.563000000 +0000
	I0603 14:27:18.998801   11176 command_runner.go:130] >  Birth: -
	I0603 14:27:18.998830   11176 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 14:27:18.998830   11176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 14:27:19.049786   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0603 14:27:19.490794   11176 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0603 14:27:19.490794   11176 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0603 14:27:19.490794   11176 command_runner.go:130] > serviceaccount/kindnet created
	I0603 14:27:19.490794   11176 command_runner.go:130] > daemonset.apps/kindnet created
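The kubectl apply above creates the kindnet ClusterRole, ClusterRoleBinding, ServiceAccount and DaemonSet that provide pod networking for the multinode profile. A quick manual check that the DaemonSet rolled out, assuming the same kubeconfig the log uses (a sketch, not part of the recorded run):

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonset kindnet
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system rollout status daemonset/kindnet --timeout=2m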
	I0603 14:27:19.490794   11176 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 14:27:19.504768   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:19.504768   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-720500 minikube.k8s.io/updated_at=2024_06_03T14_27_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=multinode-720500 minikube.k8s.io/primary=true
	I0603 14:27:19.511774   11176 command_runner.go:130] > -16
	I0603 14:27:19.511774   11176 ops.go:34] apiserver oom_adj: -16
	I0603 14:27:19.792276   11176 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0603 14:27:19.802149   11176 command_runner.go:130] > node/multinode-720500 labeled
	I0603 14:27:19.805179   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:19.918171   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:20.315139   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:20.424912   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:20.812180   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:20.916991   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:21.310671   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:21.416354   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:21.810805   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:21.919263   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:22.315975   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:22.422309   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:22.818377   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:22.923374   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:23.305061   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:23.408920   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:23.807362   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:23.919570   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:24.311106   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:24.423047   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:24.814791   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:24.921978   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:25.309413   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:25.414638   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:25.813620   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:25.932684   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:26.315112   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:26.416950   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:26.816926   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:26.920593   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:27.306129   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:27.416594   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:27.807333   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:27.919209   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:28.311995   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:28.438621   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:28.819116   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:28.938025   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:29.308738   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:29.428809   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:29.812174   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:29.929136   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:30.320557   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:30.429932   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:30.807327   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:30.920952   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:31.314812   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:31.423770   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:31.806271   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:31.914854   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:32.310389   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:32.482193   11176 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0603 14:27:32.819333   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 14:27:32.978518   11176 command_runner.go:130] > NAME      SECRETS   AGE
	I0603 14:27:32.978518   11176 command_runner.go:130] > default   0         0s
	I0603 14:27:32.978614   11176 kubeadm.go:1107] duration metric: took 13.4877094s to wait for elevateKubeSystemPrivileges
	W0603 14:27:32.978719   11176 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 14:27:32.978743   11176 kubeadm.go:393] duration metric: took 28.9146581s to StartCluster
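The repeated 'serviceaccounts "default" not found' responses above are the elevateKubeSystemPrivileges wait loop: minikube keeps re-running the same kubectl get until kube-controller-manager has created the default ServiceAccount (the NAME/SECRETS/AGE output at the end). A minimal shell equivalent of that wait, using the binary and kubeconfig shown in the log:

    # poll until the "default" ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done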
	I0603 14:27:32.978856   11176 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:27:32.979049   11176 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:27:32.981401   11176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:27:32.981948   11176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 14:27:32.981948   11176 start.go:234] Will wait 6m0s for node &{Name: IP:172.22.150.195 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 14:27:32.982955   11176 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 14:27:32.982955   11176 addons.go:69] Setting storage-provisioner=true in profile "multinode-720500"
	I0603 14:27:32.990947   11176 addons.go:234] Setting addon storage-provisioner=true in "multinode-720500"
	I0603 14:27:32.982955   11176 addons.go:69] Setting default-storageclass=true in profile "multinode-720500"
	I0603 14:27:32.982955   11176 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:27:32.990947   11176 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-720500"
	I0603 14:27:32.990947   11176 host.go:66] Checking if "multinode-720500" exists ...
	I0603 14:27:32.990947   11176 out.go:177] * Verifying Kubernetes components...
	I0603 14:27:32.991927   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:27:32.991927   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:27:33.011951   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:27:33.285261   11176 command_runner.go:130] > apiVersion: v1
	I0603 14:27:33.285261   11176 command_runner.go:130] > data:
	I0603 14:27:33.285261   11176 command_runner.go:130] >   Corefile: |
	I0603 14:27:33.285261   11176 command_runner.go:130] >     .:53 {
	I0603 14:27:33.285261   11176 command_runner.go:130] >         errors
	I0603 14:27:33.285261   11176 command_runner.go:130] >         health {
	I0603 14:27:33.285261   11176 command_runner.go:130] >            lameduck 5s
	I0603 14:27:33.285261   11176 command_runner.go:130] >         }
	I0603 14:27:33.285261   11176 command_runner.go:130] >         ready
	I0603 14:27:33.285261   11176 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0603 14:27:33.285261   11176 command_runner.go:130] >            pods insecure
	I0603 14:27:33.285261   11176 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0603 14:27:33.285261   11176 command_runner.go:130] >            ttl 30
	I0603 14:27:33.285261   11176 command_runner.go:130] >         }
	I0603 14:27:33.285261   11176 command_runner.go:130] >         prometheus :9153
	I0603 14:27:33.285261   11176 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0603 14:27:33.285261   11176 command_runner.go:130] >            max_concurrent 1000
	I0603 14:27:33.285261   11176 command_runner.go:130] >         }
	I0603 14:27:33.285261   11176 command_runner.go:130] >         cache 30
	I0603 14:27:33.285261   11176 command_runner.go:130] >         loop
	I0603 14:27:33.285261   11176 command_runner.go:130] >         reload
	I0603 14:27:33.285261   11176 command_runner.go:130] >         loadbalance
	I0603 14:27:33.285261   11176 command_runner.go:130] >     }
	I0603 14:27:33.285261   11176 command_runner.go:130] > kind: ConfigMap
	I0603 14:27:33.285261   11176 command_runner.go:130] > metadata:
	I0603 14:27:33.285261   11176 command_runner.go:130] >   creationTimestamp: "2024-06-03T14:27:18Z"
	I0603 14:27:33.285261   11176 command_runner.go:130] >   name: coredns
	I0603 14:27:33.285261   11176 command_runner.go:130] >   namespace: kube-system
	I0603 14:27:33.285261   11176 command_runner.go:130] >   resourceVersion: "262"
	I0603 14:27:33.285261   11176 command_runner.go:130] >   uid: 06e6ecc6-01b8-4b78-a869-ba6a9b459833
	I0603 14:27:33.285910   11176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.22.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 14:27:33.453568   11176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 14:27:34.012777   11176 command_runner.go:130] > configmap/coredns replaced
	I0603 14:27:34.012923   11176 start.go:946] {"host.minikube.internal": 172.22.144.1} host record injected into CoreDNS's ConfigMap
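The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the Hyper-V host address. Reconstructed from that sed expression (not a verbatim dump of the patched ConfigMap), the relevant Corefile stanza afterwards reads roughly:

            hosts {
               172.22.144.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf {
               max_concurrent 1000
            }

A log directive is also inserted ahead of errors by the same pipeline.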
	I0603 14:27:34.013511   11176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:27:34.013511   11176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:27:34.014590   11176 kapi.go:59] client config for multinode-720500: &rest.Config{Host:"https://172.22.150.195:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-720500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-720500\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 14:27:34.014590   11176 kapi.go:59] client config for multinode-720500: &rest.Config{Host:"https://172.22.150.195:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-720500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-720500\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 14:27:34.016301   11176 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 14:27:34.016682   11176 node_ready.go:35] waiting up to 6m0s for node "multinode-720500" to be "Ready" ...
	I0603 14:27:34.016682   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:34.016682   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:34.016682   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:34.016682   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:34.016682   11176 round_trippers.go:463] GET https://172.22.150.195:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0603 14:27:34.016682   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:34.016682   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:34.016682   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:34.040328   11176 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0603 14:27:34.040328   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:34.040328   11176 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0603 14:27:34.040328   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:34.040328   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:34.040328   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:34.040328   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:34 GMT
	I0603 14:27:34.040328   11176 round_trippers.go:580]     Content-Length: 291
	I0603 14:27:34.040328   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:34 GMT
	I0603 14:27:34.040328   11176 round_trippers.go:580]     Audit-Id: 9195e307-31dc-4a59-ad21-8727f98ba0c0
	I0603 14:27:34.040328   11176 round_trippers.go:580]     Audit-Id: 5b8a91fb-5e2f-40c1-bcfb-849792c242c7
	I0603 14:27:34.040328   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:34.040328   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:34.040328   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:34.040328   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:34.040328   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:34.040328   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:34.040328   11176 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fa149e27-d305-43eb-954f-fa5d446a8241","resourceVersion":"387","creationTimestamp":"2024-06-03T14:27:18Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0603 14:27:34.040328   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:34.041276   11176 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fa149e27-d305-43eb-954f-fa5d446a8241","resourceVersion":"387","creationTimestamp":"2024-06-03T14:27:18Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0603 14:27:34.041276   11176 round_trippers.go:463] PUT https://172.22.150.195:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0603 14:27:34.041276   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:34.041276   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:34.041276   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:34.041276   11176 round_trippers.go:473]     Content-Type: application/json
	I0603 14:27:34.057428   11176 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0603 14:27:34.057564   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:34.057564   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:34.057655   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:34.057655   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:34.057655   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:34.057655   11176 round_trippers.go:580]     Content-Length: 291
	I0603 14:27:34.057655   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:34 GMT
	I0603 14:27:34.057655   11176 round_trippers.go:580]     Audit-Id: 6a2a3742-c219-4a74-95d2-1cfe4c2eb885
	I0603 14:27:34.057762   11176 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fa149e27-d305-43eb-954f-fa5d446a8241","resourceVersion":"391","creationTimestamp":"2024-06-03T14:27:18Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0603 14:27:34.517370   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:34.517370   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:34.517370   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:34.517370   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:34.517370   11176 round_trippers.go:463] GET https://172.22.150.195:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0603 14:27:34.517370   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:34.517370   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:34.517370   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:34.521343   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:34.521343   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:34.521343   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:34.521343   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:34.521343   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:34.521343   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:34 GMT
	I0603 14:27:34.521343   11176 round_trippers.go:580]     Audit-Id: 3328624f-9acf-4299-81c7-0d163653c34a
	I0603 14:27:34.521343   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:34.521343   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:34.522335   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:27:34.522335   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:34.522335   11176 round_trippers.go:580]     Audit-Id: 2d014562-fb6b-4ba8-bc8c-7404d73c200d
	I0603 14:27:34.522335   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:34.522335   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:34.522335   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:34.522335   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:34.522335   11176 round_trippers.go:580]     Content-Length: 291
	I0603 14:27:34.522335   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:34 GMT
	I0603 14:27:34.522335   11176 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fa149e27-d305-43eb-954f-fa5d446a8241","resourceVersion":"403","creationTimestamp":"2024-06-03T14:27:18Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0603 14:27:34.522335   11176 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-720500" context rescaled to 1 replicas
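The PUT against the coredns scale subresource above drops the Deployment from 2 replicas to 1 for this profile. An equivalent kubectl invocation, assuming the same kubeconfig (a sketch of what the API call does, not a command from the run):

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system scale deployment coredns --replicas=1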
	I0603 14:27:35.026082   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:35.026169   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:35.026169   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:35.026169   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:35.030820   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:27:35.031838   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:35.031887   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:35 GMT
	I0603 14:27:35.031887   11176 round_trippers.go:580]     Audit-Id: 33bfed67-12fd-4957-97fa-097d61565340
	I0603 14:27:35.031887   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:35.031887   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:35.031887   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:35.031887   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:35.032087   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:35.382156   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:27:35.382156   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:27:35.385640   11176 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 14:27:35.384999   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:27:35.387564   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:27:35.388303   11176 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 14:27:35.388303   11176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
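The storage-provisioner manifest is copied to /etc/kubernetes/addons/storage-provisioner.yaml here; applying it is what eventually creates the storage-provisioner pod in kube-system (the exact apply invocation minikube issues appears later in the run). A hedged sketch of performing that step by hand from inside the VM:

    sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply \
         -f /etc/kubernetes/addons/storage-provisioner.yaml \
         --kubeconfig=/var/lib/minikube/kubeconfig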
	I0603 14:27:35.388303   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:27:35.388981   11176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:27:35.389599   11176 kapi.go:59] client config for multinode-720500: &rest.Config{Host:"https://172.22.150.195:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-720500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-720500\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 14:27:35.390335   11176 addons.go:234] Setting addon default-storageclass=true in "multinode-720500"
	I0603 14:27:35.390389   11176 host.go:66] Checking if "multinode-720500" exists ...
	I0603 14:27:35.391426   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:27:35.532865   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:35.532865   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:35.532865   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:35.532865   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:35.536769   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:35.537231   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:35.537231   11176 round_trippers.go:580]     Audit-Id: 47cf30c0-ffc8-49fa-bc76-ec1d29ab306f
	I0603 14:27:35.537231   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:35.537231   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:35.537231   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:35.537231   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:35.537231   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:35 GMT
	I0603 14:27:35.538509   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:36.063641   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:36.063641   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:36.063641   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:36.063641   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:36.067617   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:36.068420   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:36.068420   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:36.068420   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:36.068420   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:36 GMT
	I0603 14:27:36.068420   11176 round_trippers.go:580]     Audit-Id: 85829101-9baf-4d1f-ab62-4bf55cfd7e13
	I0603 14:27:36.068563   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:36.068563   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:36.068615   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:36.069661   11176 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
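node_ready.go keeps issuing GET /api/v1/nodes/multinode-720500 until the node reports Ready; at this point it is still "False", typically because the CNI (kindnet here) has not yet finished bringing up the pod network. An equivalent one-shot check with kubectl wait, assuming the same kubeconfig and the 6m0s budget stated above:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig wait --for=condition=Ready node/multinode-720500 --timeout=6m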
	I0603 14:27:36.528892   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:36.528892   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:36.528892   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:36.528892   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:36.532960   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:36.533120   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:36.533120   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:36.533120   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:36.533120   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:36 GMT
	I0603 14:27:36.533120   11176 round_trippers.go:580]     Audit-Id: cf575ec1-7796-45d0-9267-109ec8e0c0c8
	I0603 14:27:36.533120   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:36.533120   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:36.533120   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:37.020997   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:37.021102   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:37.021226   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:37.021226   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:37.024440   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:37.025433   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:37.025433   11176 round_trippers.go:580]     Audit-Id: 0c7578cd-fb61-461c-a0f3-17abd97fa79f
	I0603 14:27:37.025433   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:37.025433   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:37.025433   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:37.025433   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:37.025433   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:37 GMT
	I0603 14:27:37.026246   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:37.529429   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:37.529512   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:37.529512   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:37.529512   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:37.532329   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:27:37.532329   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:37.532329   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:37.532329   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:37.532329   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:37.532329   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:37 GMT
	I0603 14:27:37.532329   11176 round_trippers.go:580]     Audit-Id: 9ba4a5b5-05b9-4dc2-8ab0-4290c3a9e7b0
	I0603 14:27:37.532329   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:37.533303   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:37.755180   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:27:37.755544   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:27:37.755615   11176 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 14:27:37.755615   11176 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 14:27:37.755615   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:27:37.830430   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:27:37.830509   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:27:37.830575   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
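The PowerShell one-liner being executed here is how the Hyper-V driver resolves the VM's IP address before opening SSH (the result, 172.22.150.195, appears further down). A rough Go equivalent that shells out to the same query; the helper name is illustrative and this is not minikube's actual driver code.

// hypervVMIP runs the PowerShell expression logged above and returns the
// first IP address of the VM's first network adapter.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hypervVMIP(vmName string) (string, error) {
	query := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName)
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", query,
	).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := hypervVMIP("multinode-720500")
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	fmt.Println("VM IP:", ip) // expected to print 172.22.150.195 for this run
}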
	I0603 14:27:38.019648   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:38.019739   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:38.019739   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:38.019739   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:38.023048   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:38.023442   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:38.023442   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:38.023442   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:38.023442   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:38.023442   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:38 GMT
	I0603 14:27:38.023442   11176 round_trippers.go:580]     Audit-Id: a5116212-1fb2-4aea-9016-8fd6e85f0ec6
	I0603 14:27:38.023442   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:38.023794   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:38.526090   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:38.526090   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:38.526090   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:38.526090   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:38.530454   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:27:38.530454   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:38.530454   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:38 GMT
	I0603 14:27:38.530454   11176 round_trippers.go:580]     Audit-Id: 8d00afbc-60d0-4969-8eda-bd07934fb402
	I0603 14:27:38.530454   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:38.530454   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:38.530454   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:38.530454   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:38.530744   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:38.531013   11176 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:27:39.019705   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:39.019705   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:39.019829   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:39.019829   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:39.023609   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:39.024021   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:39.024021   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:39.024021   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:39.024021   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:39.024021   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:39 GMT
	I0603 14:27:39.024138   11176 round_trippers.go:580]     Audit-Id: 6df9901e-0321-4a18-8c49-740f3112f419
	I0603 14:27:39.024138   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:39.024404   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:39.531320   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:39.531408   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:39.531408   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:39.531408   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:39.535373   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:39.535645   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:39.535720   11176 round_trippers.go:580]     Audit-Id: e6374bcc-9233-4b34-b299-0c5620d63bb7
	I0603 14:27:39.535720   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:39.535720   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:39.535720   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:39.535720   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:39.535720   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:39 GMT
	I0603 14:27:39.536007   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:40.021623   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:40.021691   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:40.021691   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:40.021691   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:40.097309   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:27:40.097309   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:27:40.098360   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:27:40.215288   11176 round_trippers.go:574] Response Status: 200 OK in 193 milliseconds
	I0603 14:27:40.215288   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:40.215288   11176 round_trippers.go:580]     Audit-Id: 09190e8e-6c53-4687-9749-960728974fa7
	I0603 14:27:40.215288   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:40.216253   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:40.216253   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:40.216253   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:40.216253   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:40 GMT
	I0603 14:27:40.216552   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:40.528382   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:40.528485   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:40.528485   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:40.528485   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:40.532684   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:40.532766   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:40.532766   11176 round_trippers.go:580]     Audit-Id: 5f6234ed-440e-463f-bcc0-ea5cdf3a498f
	I0603 14:27:40.532766   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:40.532766   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:40.532766   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:40.532766   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:40.532864   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:40 GMT
	I0603 14:27:40.533214   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:40.533982   11176 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:27:40.586942   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:27:40.586942   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:27:40.587136   11176 sshutil.go:53] new ssh client: &{IP:172.22.150.195 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:27:40.747994   11176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 14:27:41.032511   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:41.032511   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:41.032511   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:41.032511   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:41.035621   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:41.035621   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:41.035621   11176 round_trippers.go:580]     Audit-Id: 0dfdae68-f7c8-4d49-bf14-837c0c4a98d6
	I0603 14:27:41.035621   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:41.035621   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:41.036623   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:41.036623   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:41.036623   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:41 GMT
	I0603 14:27:41.036873   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:41.324529   11176 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0603 14:27:41.324652   11176 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0603 14:27:41.324652   11176 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0603 14:27:41.324652   11176 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0603 14:27:41.324749   11176 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0603 14:27:41.324749   11176 command_runner.go:130] > pod/storage-provisioner created
	I0603 14:27:41.522129   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:41.522129   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:41.522129   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:41.522129   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:41.524713   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:27:41.525590   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:41.525590   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:41 GMT
	I0603 14:27:41.525590   11176 round_trippers.go:580]     Audit-Id: e3c826fb-a5f4-4e76-ad5c-9f01db9a8c56
	I0603 14:27:41.525590   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:41.525590   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:41.525590   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:41.525590   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:41.525810   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:42.030672   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:42.030916   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:42.030916   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:42.030916   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:42.035155   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:27:42.035155   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:42.035256   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:42 GMT
	I0603 14:27:42.035256   11176 round_trippers.go:580]     Audit-Id: 35b2b1b3-6fe0-4f5a-bfca-ab548b7c35ec
	I0603 14:27:42.035256   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:42.035256   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:42.035256   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:42.035256   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:42.038678   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:42.522823   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:42.522942   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:42.522942   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:42.522942   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:42.526304   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:42.526304   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:42.526834   11176 round_trippers.go:580]     Audit-Id: 7c2658d1-d431-426c-8c99-b163acfacbe4
	I0603 14:27:42.526834   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:42.526834   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:42.526834   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:42.526834   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:42.526834   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:42 GMT
	I0603 14:27:42.526939   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"339","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0603 14:27:42.784525   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:27:42.785397   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:27:42.785635   11176 sshutil.go:53] new ssh client: &{IP:172.22.150.195 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:27:42.934340   11176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 14:27:43.027264   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:43.027264   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:43.027264   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:43.027264   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:43.037272   11176 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 14:27:43.037718   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:43.037718   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:43.037718   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:43.037718   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:43 GMT
	I0603 14:27:43.037718   11176 round_trippers.go:580]     Audit-Id: 493f1cf4-a703-4b67-946a-2ecda7f56b0d
	I0603 14:27:43.037718   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:43.037803   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:43.041133   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"425","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4856 chars]
	I0603 14:27:43.041268   11176 node_ready.go:49] node "multinode-720500" has status "Ready":"True"
	I0603 14:27:43.041268   11176 node_ready.go:38] duration metric: took 9.0245126s for node "multinode-720500" to be "Ready" ...
	I0603 14:27:43.041268   11176 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
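From here the harness switches from node readiness to pod readiness: it lists kube-system pods matching the labels above and waits for each pod's Ready condition. A small client-go sketch of one such check, assuming a clientset built as in the earlier node sketch; the label selector is copied from the log and the function name is illustrative.

// firstDNSPodReady lists kube-system pods labelled k8s-app=kube-dns and
// reports whether the first one has condition Ready=True.
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func firstDNSPodReady(cs *kubernetes.Clientset) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil || len(pods.Items) == 0 {
		return false, err
	}
	for _, cond := range pods.Items[0].Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}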
	I0603 14:27:43.041907   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods
	I0603 14:27:43.041970   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:43.041996   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:43.041996   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:43.053340   11176 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0603 14:27:43.053340   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:43.053340   11176 round_trippers.go:580]     Audit-Id: 1ca70f30-351e-4b1d-84a4-f57ce3d4635a
	I0603 14:27:43.053340   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:43.053340   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:43.053340   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:43.053340   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:43.053340   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:43 GMT
	I0603 14:27:43.055299   11176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"385","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53188 chars]
	I0603 14:27:43.059302   11176 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace to be "Ready" ...
	I0603 14:27:43.059302   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:27:43.059302   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:43.059302   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:43.059302   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:43.064282   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:27:43.064282   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:43.065173   11176 round_trippers.go:580]     Audit-Id: ae7837ae-1452-49b0-80ed-34f6211f4d15
	I0603 14:27:43.065173   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:43.065173   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:43.065173   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:43.065173   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:43.065173   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:43 GMT
	I0603 14:27:43.065469   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"385","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 4942 chars]
	I0603 14:27:43.150971   11176 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0603 14:27:43.151302   11176 round_trippers.go:463] GET https://172.22.150.195:8443/apis/storage.k8s.io/v1/storageclasses
	I0603 14:27:43.151302   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:43.151302   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:43.151302   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:43.154924   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:43.155842   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:43.155984   11176 round_trippers.go:580]     Audit-Id: d41c093a-ff04-4082-b578-8ee95a708aa9
	I0603 14:27:43.156043   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:43.156043   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:43.156043   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:43.156043   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:43.156043   11176 round_trippers.go:580]     Content-Length: 1273
	I0603 14:27:43.156119   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:43 GMT
	I0603 14:27:43.156204   11176 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"standard","uid":"224e32cc-989b-48d9-b801-20027f71bb8c","resourceVersion":"431","creationTimestamp":"2024-06-03T14:27:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-03T14:27:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0603 14:27:43.156486   11176 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"224e32cc-989b-48d9-b801-20027f71bb8c","resourceVersion":"431","creationTimestamp":"2024-06-03T14:27:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-03T14:27:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0603 14:27:43.157100   11176 round_trippers.go:463] PUT https://172.22.150.195:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0603 14:27:43.157156   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:43.157243   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:43.157243   11176 round_trippers.go:473]     Content-Type: application/json
	I0603 14:27:43.157243   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:43.161167   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:43.161167   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:43.161501   11176 round_trippers.go:580]     Audit-Id: 5134d3e8-08b4-4417-af03-a6278f69c42f
	I0603 14:27:43.161501   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:43.161501   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:43.161501   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:43.161501   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:43.161501   11176 round_trippers.go:580]     Content-Length: 1220
	I0603 14:27:43.161501   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:43 GMT
	I0603 14:27:43.161658   11176 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"224e32cc-989b-48d9-b801-20027f71bb8c","resourceVersion":"431","creationTimestamp":"2024-06-03T14:27:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-03T14:27:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0603 14:27:43.167289   11176 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0603 14:27:43.170269   11176 addons.go:510] duration metric: took 10.1882371s for enable addons: enabled=[storage-provisioner default-storageclass]
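The PUT to /apis/storage.k8s.io/v1/storageclasses/standard a few lines up is what marks that class as the cluster default, and this line confirms both addons are enabled. A small client-go sketch that verifies the resulting annotation, again assuming a configured clientset; the function name is illustrative.

// standardIsDefault checks that the "standard" StorageClass carries the
// is-default-class annotation the default-storageclass addon sets.
package readiness

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func standardIsDefault(cs *kubernetes.Clientset) (bool, error) {
	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true", nil
}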
	I0603 14:27:43.565806   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:27:43.565806   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:43.565806   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:43.565806   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:43.569401   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:43.569401   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:43.569401   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:43.569800   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:43.569800   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:43 GMT
	I0603 14:27:43.569800   11176 round_trippers.go:580]     Audit-Id: 0d10de76-b107-45fd-9129-901eeb5a8e25
	I0603 14:27:43.569800   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:43.569800   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:43.571498   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"432","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0603 14:27:43.572278   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:43.572356   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:43.572356   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:43.572356   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:43.576159   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:43.576159   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:43.576159   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:43.576159   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:43 GMT
	I0603 14:27:43.576159   11176 round_trippers.go:580]     Audit-Id: 6105b1e1-6908-4a3f-9f48-91942dad2ca8
	I0603 14:27:43.576159   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:43.576159   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:43.576159   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:43.576788   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"426","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0603 14:27:44.059732   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:27:44.059791   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:44.059791   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:44.059791   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:44.062819   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:27:44.062819   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:44.062819   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:44.062819   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:44.062819   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:44.063422   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:44 GMT
	I0603 14:27:44.063422   11176 round_trippers.go:580]     Audit-Id: b73e36be-d627-47da-a06c-464df5795b90
	I0603 14:27:44.063422   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:44.063625   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"432","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0603 14:27:44.064351   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:44.064351   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:44.064351   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:44.064351   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:44.067005   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:27:44.067005   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:44.067005   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:44.067005   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:44 GMT
	I0603 14:27:44.067005   11176 round_trippers.go:580]     Audit-Id: a027d87e-aedb-42ee-a0cb-5624e6f2af0a
	I0603 14:27:44.067005   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:44.067005   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:44.067150   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:44.067409   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"426","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0603 14:27:44.567815   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:27:44.568010   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:44.568010   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:44.568010   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:44.573922   11176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:27:44.574240   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:44.574240   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:44.574240   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:44.574240   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:44.574240   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:44.574240   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:44 GMT
	I0603 14:27:44.574240   11176 round_trippers.go:580]     Audit-Id: 2082aa88-2ecb-4696-92b7-43dc824dfbcf
	I0603 14:27:44.574550   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"432","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0603 14:27:44.575422   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:44.575422   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:44.575504   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:44.575504   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:44.580991   11176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:27:44.581084   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:44.581084   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:44.581124   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:44.581124   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:44.581124   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:44 GMT
	I0603 14:27:44.581124   11176 round_trippers.go:580]     Audit-Id: 54b2fa38-07d9-4916-a012-f25f8c786c8e
	I0603 14:27:44.581124   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:44.583153   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"426","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0603 14:27:45.069262   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:27:45.069398   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:45.069530   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:45.069530   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:45.073378   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:45.073378   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:45.073378   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:45.073872   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:45.073872   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:45 GMT
	I0603 14:27:45.073872   11176 round_trippers.go:580]     Audit-Id: 5f10b769-09fa-4f13-95eb-a80ff4b2e51d
	I0603 14:27:45.073872   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:45.073872   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:45.074455   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"443","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6809 chars]
	I0603 14:27:45.075254   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:45.075364   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:45.075364   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:45.075364   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:45.080390   11176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:27:45.080475   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:45.080475   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:45.080475   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:45.080475   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:45.080475   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:45 GMT
	I0603 14:27:45.080475   11176 round_trippers.go:580]     Audit-Id: f8bda6f3-eb81-4565-96c4-ffce86057a31
	I0603 14:27:45.080475   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:45.082705   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"426","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0603 14:27:45.082944   11176 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:27:45.568019   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:27:45.568098   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:45.568098   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:45.568098   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:45.574570   11176 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:27:45.574570   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:45.574570   11176 round_trippers.go:580]     Audit-Id: 40578dbb-fc93-4326-83a8-ab85af8899b8
	I0603 14:27:45.574570   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:45.574570   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:45.574570   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:45.574570   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:45.574570   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:45 GMT
	I0603 14:27:45.575274   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"443","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6809 chars]
	I0603 14:27:45.576091   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:45.576122   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:45.576122   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:45.576122   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:45.579715   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:45.579715   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:45.579715   11176 round_trippers.go:580]     Audit-Id: 9858859a-9ace-4c20-bcf7-b643c8b2f1f7
	I0603 14:27:45.580291   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:45.580291   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:45.580291   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:45.580291   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:45.580291   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:45 GMT
	I0603 14:27:45.580710   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"426","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0603 14:27:46.070721   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:27:46.070951   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.070951   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.070951   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.074827   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:46.075463   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.075463   11176 round_trippers.go:580]     Audit-Id: 3497fb8f-ef6b-4551-98ae-860a2c99d16f
	I0603 14:27:46.075463   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.075463   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.075463   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.075463   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.075566   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.075764   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"447","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0603 14:27:46.076560   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:46.076560   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.076560   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.076630   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.082372   11176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:27:46.082586   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.082586   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.082586   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.082586   11176 round_trippers.go:580]     Audit-Id: c50d475b-04d2-4626-a926-2524e12dc114
	I0603 14:27:46.082586   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.082586   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.082586   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.082586   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"426","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0603 14:27:46.083477   11176 pod_ready.go:92] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"True"
	I0603 14:27:46.083477   11176 pod_ready.go:81] duration metric: took 3.0241503s for pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace to be "Ready" ...
	I0603 14:27:46.083477   11176 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:27:46.083477   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-720500
	I0603 14:27:46.083477   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.083477   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.083477   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.086060   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:27:46.086060   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.086060   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.086060   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.086060   11176 round_trippers.go:580]     Audit-Id: 9caee55e-1e0e-4442-a24a-fe4738072d42
	I0603 14:27:46.086060   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.086060   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.086060   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.086060   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-720500","namespace":"kube-system","uid":"a99295b9-ba4f-4b3f-9bc7-3e6e09de9b09","resourceVersion":"298","creationTimestamp":"2024-06-03T14:27:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.150.195:2379","kubernetes.io/config.hash":"36433239452f37b4b0410f69c12da408","kubernetes.io/config.mirror":"36433239452f37b4b0410f69c12da408","kubernetes.io/config.seen":"2024-06-03T14:27:10.068477252Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0603 14:27:46.086060   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:46.087532   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.087532   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.087567   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.090139   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:27:46.090139   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.090139   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.090139   11176 round_trippers.go:580]     Audit-Id: 7138356e-d8c8-4714-8ff7-c5e091a5f973
	I0603 14:27:46.090491   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.090491   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.090491   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.090491   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.091118   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"426","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0603 14:27:46.091897   11176 pod_ready.go:92] pod "etcd-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:27:46.091952   11176 pod_ready.go:81] duration metric: took 8.4748ms for pod "etcd-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:27:46.091952   11176 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:27:46.092082   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-720500
	I0603 14:27:46.092134   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.092134   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.092165   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.094438   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:27:46.094438   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.094438   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.094438   11176 round_trippers.go:580]     Audit-Id: 94fa42b8-3067-4bd4-967c-d44a19ce20d4
	I0603 14:27:46.094438   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.094438   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.094438   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.094438   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.094438   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-720500","namespace":"kube-system","uid":"aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef","resourceVersion":"301","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.22.150.195:8443","kubernetes.io/config.hash":"2dc25f3659bb9b137f23bf9424dba20e","kubernetes.io/config.mirror":"2dc25f3659bb9b137f23bf9424dba20e","kubernetes.io/config.seen":"2024-06-03T14:27:18.382155538Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0603 14:27:46.095480   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:46.095480   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.095480   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.095480   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.098381   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:27:46.098381   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.098381   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.098381   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.098381   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.098381   11176 round_trippers.go:580]     Audit-Id: f0e2e47a-86cf-4878-a7a4-fe470e426e27
	I0603 14:27:46.098381   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.098381   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.099852   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"426","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0603 14:27:46.100179   11176 pod_ready.go:92] pod "kube-apiserver-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:27:46.100179   11176 pod_ready.go:81] duration metric: took 8.2267ms for pod "kube-apiserver-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:27:46.100179   11176 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:27:46.100290   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-720500
	I0603 14:27:46.100362   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.100410   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.100410   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.102680   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:27:46.102680   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.103124   11176 round_trippers.go:580]     Audit-Id: 0d66f45d-8fcf-4b78-a3ff-aa6d8afe3bf7
	I0603 14:27:46.103124   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.103124   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.103124   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.103124   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.103124   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.103428   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-720500","namespace":"kube-system","uid":"6ba9c1e5-75bb-4731-9105-49acbbf3f237","resourceVersion":"324","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"78d1bd07ad8cdd8611c0b5d7e797ef30","kubernetes.io/config.mirror":"78d1bd07ad8cdd8611c0b5d7e797ef30","kubernetes.io/config.seen":"2024-06-03T14:27:18.382156638Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0603 14:27:46.103825   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:46.103825   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.103825   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.103825   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.106459   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:27:46.106459   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.106459   11176 round_trippers.go:580]     Audit-Id: 475ee657-4c52-484f-b6bf-124c1a45cc49
	I0603 14:27:46.106459   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.106459   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.106459   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.106459   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.106459   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.107038   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"426","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0603 14:27:46.107444   11176 pod_ready.go:92] pod "kube-controller-manager-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:27:46.107444   11176 pod_ready.go:81] duration metric: took 7.2652ms for pod "kube-controller-manager-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:27:46.107502   11176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-64l9x" in "kube-system" namespace to be "Ready" ...
	I0603 14:27:46.107554   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-64l9x
	I0603 14:27:46.107612   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.107612   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.107663   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.113165   11176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:27:46.113165   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.113165   11176 round_trippers.go:580]     Audit-Id: cda6dd52-03f5-4d5b-880a-bfbc25229730
	I0603 14:27:46.113165   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.113317   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.113317   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.113317   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.113317   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.113938   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-64l9x","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a","resourceVersion":"406","creationTimestamp":"2024-06-03T14:27:32Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0603 14:27:46.114928   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:46.114983   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.114983   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.114983   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.118121   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:46.118121   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.118121   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.118121   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.118121   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.118121   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.118121   11176 round_trippers.go:580]     Audit-Id: 12d12825-7480-4f82-8ff3-fdbe563415db
	I0603 14:27:46.118121   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.118121   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"426","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0603 14:27:46.119055   11176 pod_ready.go:92] pod "kube-proxy-64l9x" in "kube-system" namespace has status "Ready":"True"
	I0603 14:27:46.119155   11176 pod_ready.go:81] duration metric: took 11.6527ms for pod "kube-proxy-64l9x" in "kube-system" namespace to be "Ready" ...
	I0603 14:27:46.119155   11176 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:27:46.275704   11176 request.go:629] Waited for 156.18ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-720500
	I0603 14:27:46.275787   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-720500
	I0603 14:27:46.275787   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.275787   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.275787   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.278428   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:27:46.278428   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.278428   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.278428   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.278428   11176 round_trippers.go:580]     Audit-Id: 9b1e6269-b0b9-42a1-b20b-134fe11d86b2
	I0603 14:27:46.278428   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.278428   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.278428   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.279588   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-720500","namespace":"kube-system","uid":"9d420d28-dde0-4504-a4d4-f840cab56ebe","resourceVersion":"322","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f58e384885de6f2352fb028e836ba47f","kubernetes.io/config.mirror":"f58e384885de6f2352fb028e836ba47f","kubernetes.io/config.seen":"2024-06-03T14:27:18.382157538Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0603 14:27:46.479711   11176 request.go:629] Waited for 199.285ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:46.479835   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:27:46.479835   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.479835   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.479835   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.485508   11176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:27:46.485508   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.485508   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.485508   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.485508   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.485508   11176 round_trippers.go:580]     Audit-Id: 39609ff6-01b4-4870-a2dd-5e0f00a4381d
	I0603 14:27:46.485508   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.485508   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.486100   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"426","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0603 14:27:46.486248   11176 pod_ready.go:92] pod "kube-scheduler-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:27:46.486248   11176 pod_ready.go:81] duration metric: took 367.0906ms for pod "kube-scheduler-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:27:46.486248   11176 pod_ready.go:38] duration metric: took 3.4449519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 14:27:46.486248   11176 api_server.go:52] waiting for apiserver process to appear ...
	I0603 14:27:46.498816   11176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:27:46.526466   11176 command_runner.go:130] > 2048
	I0603 14:27:46.526466   11176 api_server.go:72] duration metric: took 13.5434004s to wait for apiserver process to appear ...
	I0603 14:27:46.526466   11176 api_server.go:88] waiting for apiserver healthz status ...
	I0603 14:27:46.526466   11176 api_server.go:253] Checking apiserver healthz at https://172.22.150.195:8443/healthz ...
	I0603 14:27:46.534939   11176 api_server.go:279] https://172.22.150.195:8443/healthz returned 200:
	ok
	I0603 14:27:46.535470   11176 round_trippers.go:463] GET https://172.22.150.195:8443/version
	I0603 14:27:46.535531   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.535575   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.535575   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.536778   11176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:27:46.536778   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.536778   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.536778   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.536778   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.536778   11176 round_trippers.go:580]     Content-Length: 263
	I0603 14:27:46.536778   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.536778   11176 round_trippers.go:580]     Audit-Id: 1e32a816-1661-445e-82ac-8b6b6f295cb2
	I0603 14:27:46.536778   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.536778   11176 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 14:27:46.536778   11176 api_server.go:141] control plane version: v1.30.1
	I0603 14:27:46.536778   11176 api_server.go:131] duration metric: took 10.312ms to wait for apiserver health ...
	I0603 14:27:46.536778   11176 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 14:27:46.682540   11176 request.go:629] Waited for 145.7611ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods
	I0603 14:27:46.682758   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods
	I0603 14:27:46.682758   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.682758   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.682758   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.690145   11176 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 14:27:46.690145   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.690145   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.690145   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.690145   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.690145   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.690145   11176 round_trippers.go:580]     Audit-Id: 40544a74-626e-4d68-9080-a6f31a8997b6
	I0603 14:27:46.690145   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.692031   11176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"447","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0603 14:27:46.695267   11176 system_pods.go:59] 8 kube-system pods found
	I0603 14:27:46.695300   11176 system_pods.go:61] "coredns-7db6d8ff4d-c9wpc" [5d120704-a803-4278-aa7c-32304a6164a3] Running
	I0603 14:27:46.695300   11176 system_pods.go:61] "etcd-multinode-720500" [a99295b9-ba4f-4b3f-9bc7-3e6e09de9b09] Running
	I0603 14:27:46.695300   11176 system_pods.go:61] "kindnet-26s27" [08ea7c30-4962-4026-8eb0-6864835e97e6] Running
	I0603 14:27:46.695300   11176 system_pods.go:61] "kube-apiserver-multinode-720500" [aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef] Running
	I0603 14:27:46.695300   11176 system_pods.go:61] "kube-controller-manager-multinode-720500" [6ba9c1e5-75bb-4731-9105-49acbbf3f237] Running
	I0603 14:27:46.695300   11176 system_pods.go:61] "kube-proxy-64l9x" [ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a] Running
	I0603 14:27:46.695300   11176 system_pods.go:61] "kube-scheduler-multinode-720500" [9d420d28-dde0-4504-a4d4-f840cab56ebe] Running
	I0603 14:27:46.695300   11176 system_pods.go:61] "storage-provisioner" [8380cfdf-9758-4fd8-a511-db50974806a2] Running
	I0603 14:27:46.695300   11176 system_pods.go:74] duration metric: took 158.5201ms to wait for pod list to return data ...
	I0603 14:27:46.695300   11176 default_sa.go:34] waiting for default service account to be created ...
	I0603 14:27:46.884089   11176 request.go:629] Waited for 188.5333ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/namespaces/default/serviceaccounts
	I0603 14:27:46.884275   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/default/serviceaccounts
	I0603 14:27:46.884380   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:46.884380   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:46.884380   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:46.888808   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:27:46.888870   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:46.888870   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:46.888870   11176 round_trippers.go:580]     Content-Length: 261
	I0603 14:27:46.888870   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:46 GMT
	I0603 14:27:46.888870   11176 round_trippers.go:580]     Audit-Id: cd02b515-8382-4825-9582-9c169d16617f
	I0603 14:27:46.888870   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:46.888870   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:46.888870   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:46.888870   11176 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"fbd8badf-59ec-4931-b3bf-13e96cb86c7b","resourceVersion":"352","creationTimestamp":"2024-06-03T14:27:32Z"}}]}
	I0603 14:27:46.889124   11176 default_sa.go:45] found service account: "default"
	I0603 14:27:46.889124   11176 default_sa.go:55] duration metric: took 193.8225ms for default service account to be created ...
	I0603 14:27:46.889124   11176 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 14:27:47.071258   11176 request.go:629] Waited for 182.1326ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods
	I0603 14:27:47.071581   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods
	I0603 14:27:47.071822   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:47.071883   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:47.071883   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:47.076297   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:27:47.076297   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:47.076297   11176 round_trippers.go:580]     Audit-Id: afc10929-2034-4da3-ade0-cda7d5312785
	I0603 14:27:47.076297   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:47.076297   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:47.076297   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:47.076297   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:47.076297   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:47 GMT
	I0603 14:27:47.077709   11176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"447","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0603 14:27:47.080674   11176 system_pods.go:86] 8 kube-system pods found
	I0603 14:27:47.080674   11176 system_pods.go:89] "coredns-7db6d8ff4d-c9wpc" [5d120704-a803-4278-aa7c-32304a6164a3] Running
	I0603 14:27:47.080758   11176 system_pods.go:89] "etcd-multinode-720500" [a99295b9-ba4f-4b3f-9bc7-3e6e09de9b09] Running
	I0603 14:27:47.080758   11176 system_pods.go:89] "kindnet-26s27" [08ea7c30-4962-4026-8eb0-6864835e97e6] Running
	I0603 14:27:47.080758   11176 system_pods.go:89] "kube-apiserver-multinode-720500" [aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef] Running
	I0603 14:27:47.080758   11176 system_pods.go:89] "kube-controller-manager-multinode-720500" [6ba9c1e5-75bb-4731-9105-49acbbf3f237] Running
	I0603 14:27:47.080758   11176 system_pods.go:89] "kube-proxy-64l9x" [ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a] Running
	I0603 14:27:47.080758   11176 system_pods.go:89] "kube-scheduler-multinode-720500" [9d420d28-dde0-4504-a4d4-f840cab56ebe] Running
	I0603 14:27:47.080758   11176 system_pods.go:89] "storage-provisioner" [8380cfdf-9758-4fd8-a511-db50974806a2] Running
	I0603 14:27:47.080856   11176 system_pods.go:126] duration metric: took 191.6332ms to wait for k8s-apps to be running ...
	I0603 14:27:47.080856   11176 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 14:27:47.092515   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 14:27:47.119682   11176 system_svc.go:56] duration metric: took 38.8255ms WaitForService to wait for kubelet
	I0603 14:27:47.119682   11176 kubeadm.go:576] duration metric: took 14.1366111s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 14:27:47.121915   11176 node_conditions.go:102] verifying NodePressure condition ...
	I0603 14:27:47.274432   11176 request.go:629] Waited for 152.4292ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/nodes
	I0603 14:27:47.274432   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes
	I0603 14:27:47.274432   11176 round_trippers.go:469] Request Headers:
	I0603 14:27:47.274432   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:27:47.274432   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:27:47.278214   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:27:47.278300   11176 round_trippers.go:577] Response Headers:
	I0603 14:27:47.278300   11176 round_trippers.go:580]     Audit-Id: 06f2d9ac-1371-4ff1-b541-c1fc8d677a87
	I0603 14:27:47.278300   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:27:47.278300   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:27:47.278300   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:27:47.278300   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:27:47.278300   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:27:47 GMT
	I0603 14:27:47.278631   11176 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"426","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0603 14:27:47.279234   11176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:27:47.279401   11176 node_conditions.go:123] node cpu capacity is 2
	I0603 14:27:47.279401   11176 node_conditions.go:105] duration metric: took 157.4849ms to run NodePressure ...
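The NodePressure check above reads each node's ephemeral-storage and CPU capacity out of the NodeList response before the start sequence moves on. A minimal client-go sketch of the same kind of capacity read follows; it assumes a reachable kubeconfig at the default location and is purely illustrative, not minikube's own verification code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the default kubeconfig (assumed to point at the cluster under test).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Same fields the log reports: ephemeral storage and CPU capacity per node.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
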
	I0603 14:27:47.279401   11176 start.go:240] waiting for startup goroutines ...
	I0603 14:27:47.279401   11176 start.go:245] waiting for cluster config update ...
	I0603 14:27:47.279496   11176 start.go:254] writing updated cluster config ...
	I0603 14:27:47.284145   11176 out.go:177] 
	I0603 14:27:47.287369   11176 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:27:47.294998   11176 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:27:47.294998   11176 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:27:47.304524   11176 out.go:177] * Starting "multinode-720500-m02" worker node in "multinode-720500" cluster
	I0603 14:27:47.306396   11176 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 14:27:47.306396   11176 cache.go:56] Caching tarball of preloaded images
	I0603 14:27:47.307326   11176 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 14:27:47.307326   11176 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 14:27:47.307326   11176 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:27:47.311963   11176 start.go:360] acquireMachinesLock for multinode-720500-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 14:27:47.312153   11176 start.go:364] duration metric: took 93.1µs to acquireMachinesLock for "multinode-720500-m02"
	I0603 14:27:47.312302   11176 start.go:93] Provisioning new machine with config: &{Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.150.195 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 14:27:47.312560   11176 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0603 14:27:47.315969   11176 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 14:27:47.315969   11176 start.go:159] libmachine.API.Create for "multinode-720500" (driver="hyperv")
	I0603 14:27:47.316518   11176 client.go:168] LocalClient.Create starting
	I0603 14:27:47.316694   11176 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0603 14:27:47.317293   11176 main.go:141] libmachine: Decoding PEM data...
	I0603 14:27:47.317423   11176 main.go:141] libmachine: Parsing certificate...
	I0603 14:27:47.317529   11176 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0603 14:27:47.317529   11176 main.go:141] libmachine: Decoding PEM data...
	I0603 14:27:47.317529   11176 main.go:141] libmachine: Parsing certificate...
	I0603 14:27:47.317529   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0603 14:27:49.206946   11176 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0603 14:27:49.207326   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:27:49.207326   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0603 14:27:50.934858   11176 main.go:141] libmachine: [stdout =====>] : False
	
	I0603 14:27:50.934858   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:27:50.935470   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 14:27:52.448204   11176 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 14:27:52.448204   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:27:52.448300   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 14:27:56.167240   11176 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 14:27:56.167240   11176 main.go:141] libmachine: [stderr =====>] : 
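The driver shells out to PowerShell, asks for the available virtual switches as JSON, and ends up using the built-in "Default Switch" because no External switch exists on this host. A rough sketch of that enumeration from Go is shown below; the vmSwitch struct and the selection of fields mirror the query in the log, but the program is an illustrative assumption, not the hyperv driver's actual code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// vmSwitch mirrors the fields selected by the PowerShell query in the log above.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // Hyper-V enum: 0=Private, 1=Internal, 2=External
}

func main() {
	ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
	if err != nil {
		panic(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("switch %q (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}
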
	I0603 14:27:56.170083   11176 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 14:27:56.646346   11176 main.go:141] libmachine: Creating SSH key...
	I0603 14:27:56.937366   11176 main.go:141] libmachine: Creating VM...
	I0603 14:27:56.937366   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0603 14:27:59.917140   11176 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0603 14:27:59.917215   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:27:59.917215   11176 main.go:141] libmachine: Using switch "Default Switch"
	I0603 14:27:59.917215   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0603 14:28:01.710372   11176 main.go:141] libmachine: [stdout =====>] : True
	
	I0603 14:28:01.710500   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:01.710500   11176 main.go:141] libmachine: Creating VHD
	I0603 14:28:01.710500   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0603 14:28:05.572365   11176 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1AF8C33E-D0B9-4856-9092-26A0FED7DD27
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0603 14:28:05.572689   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:05.572689   11176 main.go:141] libmachine: Writing magic tar header
	I0603 14:28:05.572837   11176 main.go:141] libmachine: Writing SSH key tar header
	I0603 14:28:05.581817   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0603 14:28:08.812206   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:28:08.812206   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:08.815250   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\disk.vhd' -SizeBytes 20000MB
	I0603 14:28:11.375221   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:28:11.376322   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:11.376322   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-720500-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0603 14:28:15.086653   11176 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-720500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0603 14:28:15.086653   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:15.086653   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-720500-m02 -DynamicMemoryEnabled $false
	I0603 14:28:17.403882   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:28:17.403882   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:17.404745   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-720500-m02 -Count 2
	I0603 14:28:19.620954   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:28:19.621583   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:19.621583   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-720500-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\boot2docker.iso'
	I0603 14:28:22.275490   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:28:22.275490   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:22.275641   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-720500-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\disk.vhd'
	I0603 14:28:24.917447   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:28:24.917509   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:24.917509   11176 main.go:141] libmachine: Starting VM...
	I0603 14:28:24.917509   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-720500-m02
	I0603 14:28:28.032087   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:28:28.032087   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:28.032087   11176 main.go:141] libmachine: Waiting for host to start...
	I0603 14:28:28.033041   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:28:30.398575   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:28:30.398575   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:30.398668   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:28:32.985262   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:28:32.986212   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:33.999138   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:28:36.255627   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:28:36.255627   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:36.255627   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:28:38.891075   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:28:38.891075   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:39.897403   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:28:42.168443   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:28:42.168443   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:42.168443   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:28:44.742515   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:28:44.742612   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:45.758107   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:28:48.004995   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:28:48.004995   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:48.005203   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:28:50.596679   11176 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:28:50.596679   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:51.606063   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:28:53.842776   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:28:53.842877   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:53.843006   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:28:56.441976   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:28:56.441976   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:28:56.442209   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:28:58.589143   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:28:58.589143   11176 main.go:141] libmachine: [stderr =====>] : 
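"Waiting for host to start" above is a poll loop: the driver repeatedly reads the VM state and the first IP address of the first network adapter, sleeping between attempts until an address appears (here 172.22.146.196 after several rounds). A simplified sketch of that loop follows; the helper names, timeout, and sleep interval are assumptions for illustration, while the PowerShell snippets are the ones visible in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOutput runs a PowerShell snippet and returns its trimmed stdout.
func psOutput(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls the named Hyper-V VM until it is Running and reports an IP,
// or the deadline passes. Illustrative only; the real loop lives in minikube's
// hyperv driver.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := psOutput(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			return "", err
		}
		if state == "Running" {
			ip, err := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if err != nil {
				return "", err
			}
			if ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
}

func main() {
	ip, err := waitForIP("multinode-720500-m02", 5*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("VM is reachable at", ip)
}
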
	I0603 14:28:58.589720   11176 machine.go:94] provisionDockerMachine start ...
	I0603 14:28:58.589810   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:29:00.793983   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:29:00.793983   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:00.793983   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:29:03.369667   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:29:03.370008   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:03.374497   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:29:03.386866   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.196 22 <nil> <nil>}
	I0603 14:29:03.386933   11176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 14:29:03.508029   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 14:29:03.508029   11176 buildroot.go:166] provisioning hostname "multinode-720500-m02"
	I0603 14:29:03.508029   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:29:05.668849   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:29:05.669402   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:05.669474   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:29:08.260757   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:29:08.260757   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:08.266643   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:29:08.266718   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.196 22 <nil> <nil>}
	I0603 14:29:08.266718   11176 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-720500-m02 && echo "multinode-720500-m02" | sudo tee /etc/hostname
	I0603 14:29:08.420799   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-720500-m02
	
	I0603 14:29:08.420959   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:29:10.629733   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:29:10.629733   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:10.630362   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:29:13.208366   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:29:13.208366   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:13.214803   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:29:13.214958   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.196 22 <nil> <nil>}
	I0603 14:29:13.214958   11176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-720500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-720500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-720500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 14:29:13.353176   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
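The shell snippet just executed makes the new hostname resolvable on the guest: if no /etc/hosts line already ends in the machine name, it either rewrites the existing 127.0.1.1 entry or appends one. The same decision logic, expressed over the file contents as a string, might look like the sketch below; ensureHostname is a hypothetical helper added purely for illustration.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell logic above: leave /etc/hosts alone if the
// name is already present, otherwise rewrite the 127.0.1.1 entry or append one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // hostname already mapped
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 minikube\n", "multinode-720500-m02"))
}
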
	I0603 14:29:13.353176   11176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 14:29:13.353176   11176 buildroot.go:174] setting up certificates
	I0603 14:29:13.353176   11176 provision.go:84] configureAuth start
	I0603 14:29:13.353176   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:29:15.585981   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:29:15.585981   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:15.585981   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:29:18.142126   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:29:18.142126   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:18.142649   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:29:20.315516   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:29:20.315516   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:20.316533   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:29:22.898531   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:29:22.898531   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:22.898707   11176 provision.go:143] copyHostCerts
	I0603 14:29:22.898707   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 14:29:22.898707   11176 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 14:29:22.898707   11176 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 14:29:22.899577   11176 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 14:29:22.900720   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 14:29:22.901043   11176 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 14:29:22.901043   11176 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 14:29:22.901444   11176 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 14:29:22.902345   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 14:29:22.902670   11176 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 14:29:22.902670   11176 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 14:29:22.903211   11176 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 14:29:22.904127   11176 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-720500-m02 san=[127.0.0.1 172.22.146.196 localhost minikube multinode-720500-m02]
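configureAuth generates a Docker server certificate signed by the minikube CA with the SANs listed above: the loopback address, the new VM's IP, and the localhost/minikube/machine-name hostnames. The self-contained sketch below shows the shape of that SAN handling with crypto/x509; for brevity it creates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, so it is an assumption-laden illustration rather than minikube's actual code path.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads ca.pem / ca-key.pem from the .minikube certs directory.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"throwaway-ca"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate carrying the SANs seen in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-720500-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.22.146.196")},
		DNSNames:     []string{"localhost", "minikube", "multinode-720500-m02"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
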
	I0603 14:29:23.139962   11176 provision.go:177] copyRemoteCerts
	I0603 14:29:23.152946   11176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 14:29:23.152946   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:29:25.314644   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:29:25.314697   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:25.314697   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:29:27.886445   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:29:27.886445   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:27.886696   11176 sshutil.go:53] new ssh client: &{IP:172.22.146.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:29:27.986632   11176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.833646s)
	I0603 14:29:27.986632   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 14:29:27.986632   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 14:29:28.037015   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 14:29:28.037245   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0603 14:29:28.090186   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 14:29:28.090950   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 14:29:28.138726   11176 provision.go:87] duration metric: took 14.7854284s to configureAuth
	I0603 14:29:28.138726   11176 buildroot.go:189] setting minikube options for container-runtime
	I0603 14:29:28.139428   11176 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:29:28.139506   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:29:30.312449   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:29:30.313068   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:30.313068   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:29:32.920012   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:29:32.920327   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:32.926167   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:29:32.926435   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.196 22 <nil> <nil>}
	I0603 14:29:32.926435   11176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 14:29:33.045366   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 14:29:33.045366   11176 buildroot.go:70] root file system type: tmpfs
	I0603 14:29:33.045718   11176 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 14:29:33.045718   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:29:35.225021   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:29:35.225021   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:35.225132   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:29:37.835317   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:29:37.835317   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:37.840947   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:29:37.841747   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.196 22 <nil> <nil>}
	I0603 14:29:37.841747   11176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.22.150.195"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 14:29:37.997391   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.22.150.195
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 14:29:37.997440   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:29:40.140128   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:29:40.140731   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:40.140731   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:29:42.699146   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:29:42.699146   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:42.705277   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:29:42.705605   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.196 22 <nil> <nil>}
	I0603 14:29:42.705605   11176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 14:29:44.845578   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 14:29:44.845633   11176 machine.go:97] duration metric: took 46.2555337s to provisionDockerMachine
	I0603 14:29:44.845686   11176 client.go:171] duration metric: took 1m57.5282048s to LocalClient.Create
	I0603 14:29:44.845686   11176 start.go:167] duration metric: took 1m57.528753s to libmachine.API.Create "multinode-720500"
	I0603 14:29:44.845772   11176 start.go:293] postStartSetup for "multinode-720500-m02" (driver="hyperv")
	I0603 14:29:44.845772   11176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 14:29:44.860326   11176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 14:29:44.860326   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:29:47.029542   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:29:47.030290   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:47.031703   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:29:49.636980   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:29:49.636980   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:49.637899   11176 sshutil.go:53] new ssh client: &{IP:172.22.146.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:29:49.744467   11176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8831017s)
	I0603 14:29:49.756173   11176 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 14:29:49.763075   11176 command_runner.go:130] > NAME=Buildroot
	I0603 14:29:49.763075   11176 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 14:29:49.763075   11176 command_runner.go:130] > ID=buildroot
	I0603 14:29:49.763075   11176 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 14:29:49.763075   11176 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 14:29:49.763465   11176 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 14:29:49.763484   11176 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 14:29:49.763906   11176 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 14:29:49.764272   11176 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 14:29:49.764848   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 14:29:49.778852   11176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 14:29:49.798547   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 14:29:49.845350   11176 start.go:296] duration metric: took 4.9995363s for postStartSetup
	I0603 14:29:49.848530   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:29:51.996615   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:29:51.997252   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:51.997252   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:29:54.561212   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:29:54.561486   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:54.561785   11176 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:29:54.564686   11176 start.go:128] duration metric: took 2m7.2510829s to createHost
	I0603 14:29:54.564686   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:29:56.717588   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:29:56.717588   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:56.718196   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:29:59.276069   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:29:59.276297   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:29:59.282519   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:29:59.283208   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.196 22 <nil> <nil>}
	I0603 14:29:59.283208   11176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 14:29:59.417419   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717424999.423672894
	
	I0603 14:29:59.417419   11176 fix.go:216] guest clock: 1717424999.423672894
	I0603 14:29:59.417537   11176 fix.go:229] Guest: 2024-06-03 14:29:59.423672894 +0000 UTC Remote: 2024-06-03 14:29:54.5646869 +0000 UTC m=+343.422458701 (delta=4.858985994s)
	I0603 14:29:59.417537   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:30:01.623233   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:30:01.623463   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:30:01.623578   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:30:04.214541   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:30:04.214541   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:30:04.220135   11176 main.go:141] libmachine: Using SSH client type: native
	I0603 14:30:04.220953   11176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.146.196 22 <nil> <nil>}
	I0603 14:30:04.220953   11176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717424999
	I0603 14:30:04.359539   11176 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 14:29:59 UTC 2024
	
	I0603 14:30:04.359539   11176 fix.go:236] clock set: Mon Jun  3 14:29:59 UTC 2024
	 (err=<nil>)
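After provisioning, the code in fix.go compares the guest clock (read with `date +%s.%N` over SSH) against the host clock and, because the guest here was roughly five seconds ahead, resets it with `sudo date -s @<epoch>`. The small sketch below shows the delta computation; guestClockDelta is a hypothetical helper written for illustration, not the function used by minikube.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` from the guest (nine-digit
// fractional part, as GNU date prints) and returns how far the guest clock is
// from the given host time.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.Split(strings.TrimSpace(guestOut), ".")
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) > 1 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	host := time.Unix(1717424994, 0) // stand-in for the host clock at the moment of the check
	delta, _ := guestClockDelta("1717424999.423672894", host)
	fmt.Printf("guest is ahead of host by %v\n", delta)
	// When the drift is too large, the guest clock is reset to the host's time:
	fmt.Printf("resync command: sudo date -s @%d\n", host.Unix())
}
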
	I0603 14:30:04.359539   11176 start.go:83] releasing machines lock for "multinode-720500-m02", held for 2m17.0462213s
	I0603 14:30:04.359803   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:30:06.502493   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:30:06.502493   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:30:06.503098   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:30:09.112088   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:30:09.112412   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:30:09.116388   11176 out.go:177] * Found network options:
	I0603 14:30:09.118707   11176 out.go:177]   - NO_PROXY=172.22.150.195
	W0603 14:30:09.121616   11176 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 14:30:09.123943   11176 out.go:177]   - NO_PROXY=172.22.150.195
	W0603 14:30:09.126239   11176 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 14:30:09.127546   11176 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 14:30:09.129948   11176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 14:30:09.129948   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:30:09.141272   11176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 14:30:09.141272   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:30:11.373456   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:30:11.373546   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:30:11.373626   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:30:11.387272   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:30:11.387272   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:30:11.387272   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:30:14.079514   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:30:14.079514   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:30:14.079666   11176 sshutil.go:53] new ssh client: &{IP:172.22.146.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:30:14.105892   11176 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:30:14.106003   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:30:14.106256   11176 sshutil.go:53] new ssh client: &{IP:172.22.146.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:30:14.171063   11176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0603 14:30:14.171763   11176 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0304502s)
	W0603 14:30:14.171905   11176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 14:30:14.186508   11176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 14:30:14.276700   11176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 14:30:14.276802   11176 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0603 14:30:14.276802   11176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 14:30:14.276802   11176 start.go:494] detecting cgroup driver to use...
	I0603 14:30:14.277062   11176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:30:14.277142   11176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.147072s)
	I0603 14:30:14.314171   11176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 14:30:14.329393   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 14:30:14.366557   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 14:30:14.388312   11176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 14:30:14.404068   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 14:30:14.442788   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 14:30:14.475497   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 14:30:14.508635   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 14:30:14.547977   11176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 14:30:14.583051   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 14:30:14.614078   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 14:30:14.652698   11176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
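
The sed calls from 14:30:14.329 through 14:30:14.652 rewrite /etc/containerd/config.toml: sandbox image registry.k8s.io/pause:3.9, SystemdCgroup = false (cgroupfs driver), the runc v2 runtime, conf_dir = "/etc/cni/net.d", and enable_unprivileged_ports = true. A quick way to spot-check the resulting file before the containerd restart below (a suggested command, not one the test runs):

    # Show the settings the sed edits above should have produced.
    grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
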
	I0603 14:30:14.685036   11176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 14:30:14.702963   11176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 14:30:14.715582   11176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 14:30:14.747697   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:30:14.951097   11176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 14:30:14.992793   11176 start.go:494] detecting cgroup driver to use...
	I0603 14:30:15.006984   11176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 14:30:15.029971   11176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 14:30:15.029971   11176 command_runner.go:130] > [Unit]
	I0603 14:30:15.029971   11176 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 14:30:15.029971   11176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 14:30:15.029971   11176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 14:30:15.029971   11176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 14:30:15.029971   11176 command_runner.go:130] > StartLimitBurst=3
	I0603 14:30:15.029971   11176 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 14:30:15.029971   11176 command_runner.go:130] > [Service]
	I0603 14:30:15.029971   11176 command_runner.go:130] > Type=notify
	I0603 14:30:15.029971   11176 command_runner.go:130] > Restart=on-failure
	I0603 14:30:15.029971   11176 command_runner.go:130] > Environment=NO_PROXY=172.22.150.195
	I0603 14:30:15.029971   11176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 14:30:15.029971   11176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 14:30:15.029971   11176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 14:30:15.029971   11176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 14:30:15.029971   11176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 14:30:15.029971   11176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 14:30:15.029971   11176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 14:30:15.029971   11176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 14:30:15.029971   11176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 14:30:15.029971   11176 command_runner.go:130] > ExecStart=
	I0603 14:30:15.029971   11176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 14:30:15.029971   11176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 14:30:15.029971   11176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 14:30:15.029971   11176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 14:30:15.029971   11176 command_runner.go:130] > LimitNOFILE=infinity
	I0603 14:30:15.029971   11176 command_runner.go:130] > LimitNPROC=infinity
	I0603 14:30:15.029971   11176 command_runner.go:130] > LimitCORE=infinity
	I0603 14:30:15.029971   11176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 14:30:15.029971   11176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 14:30:15.029971   11176 command_runner.go:130] > TasksMax=infinity
	I0603 14:30:15.029971   11176 command_runner.go:130] > TimeoutStartSec=0
	I0603 14:30:15.029971   11176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 14:30:15.029971   11176 command_runner.go:130] > Delegate=yes
	I0603 14:30:15.029971   11176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 14:30:15.029971   11176 command_runner.go:130] > KillMode=process
	I0603 14:30:15.029971   11176 command_runner.go:130] > [Install]
	I0603 14:30:15.029971   11176 command_runner.go:130] > WantedBy=multi-user.target
	I0603 14:30:15.043078   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:30:15.083047   11176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 14:30:15.125248   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:30:15.162743   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 14:30:15.198889   11176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 14:30:15.261499   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 14:30:15.285185   11176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:30:15.318077   11176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 14:30:15.331070   11176 ssh_runner.go:195] Run: which cri-dockerd
	I0603 14:30:15.336039   11176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 14:30:15.348571   11176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 14:30:15.366650   11176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 14:30:15.411045   11176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 14:30:15.609808   11176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 14:30:15.793408   11176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 14:30:15.793408   11176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 14:30:15.838496   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:30:16.034619   11176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 14:30:18.548241   11176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5134836s)
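
Docker is then reconfigured the same way: the ~130-byte /etc/docker/daemon.json scp'd at 14:30:15.793 switches the daemon to the cgroupfs driver (its exact contents are not printed in the log). After the 2.5s restart above, the driver the daemon actually picked up can be confirmed with (a suggestion, not a logged step):

    # Expected output on this node: cgroupfs
    docker info --format '{{.CgroupDriver}}'
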
	I0603 14:30:18.564522   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 14:30:18.601365   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 14:30:18.637516   11176 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 14:30:18.842012   11176 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 14:30:19.049659   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:30:19.266263   11176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 14:30:19.311066   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 14:30:19.346901   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:30:19.545301   11176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 14:30:19.659262   11176 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 14:30:19.672481   11176 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 14:30:19.684284   11176 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 14:30:19.684284   11176 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 14:30:19.684284   11176 command_runner.go:130] > Device: 0,22	Inode: 880         Links: 1
	I0603 14:30:19.684284   11176 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 14:30:19.684284   11176 command_runner.go:130] > Access: 2024-06-03 14:30:19.578368898 +0000
	I0603 14:30:19.684284   11176 command_runner.go:130] > Modify: 2024-06-03 14:30:19.578368898 +0000
	I0603 14:30:19.684284   11176 command_runner.go:130] > Change: 2024-06-03 14:30:19.582368899 +0000
	I0603 14:30:19.684284   11176 command_runner.go:130] >  Birth: -
	I0603 14:30:19.684284   11176 start.go:562] Will wait 60s for crictl version
	I0603 14:30:19.696251   11176 ssh_runner.go:195] Run: which crictl
	I0603 14:30:19.703024   11176 command_runner.go:130] > /usr/bin/crictl
	I0603 14:30:19.715488   11176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 14:30:19.767161   11176 command_runner.go:130] > Version:  0.1.0
	I0603 14:30:19.767161   11176 command_runner.go:130] > RuntimeName:  docker
	I0603 14:30:19.767161   11176 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 14:30:19.767161   11176 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 14:30:19.767289   11176 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 14:30:19.775558   11176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 14:30:19.807149   11176 command_runner.go:130] > 26.0.2
	I0603 14:30:19.816315   11176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 14:30:19.846476   11176 command_runner.go:130] > 26.0.2
	I0603 14:30:19.849873   11176 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 14:30:19.855276   11176 out.go:177]   - env NO_PROXY=172.22.150.195
	I0603 14:30:19.857596   11176 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 14:30:19.861339   11176 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 14:30:19.861339   11176 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 14:30:19.861339   11176 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 14:30:19.861339   11176 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 14:30:19.865844   11176 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 14:30:19.865911   11176 ip.go:210] interface addr: 172.22.144.1/20
	I0603 14:30:19.879526   11176 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 14:30:19.885644   11176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 14:30:19.911109   11176 mustload.go:65] Loading cluster: multinode-720500
	I0603 14:30:19.911718   11176 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:30:19.912362   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:30:22.066410   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:30:22.066410   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:30:22.066410   11176 host.go:66] Checking if "multinode-720500" exists ...
	I0603 14:30:22.067489   11176 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500 for IP: 172.22.146.196
	I0603 14:30:22.067489   11176 certs.go:194] generating shared ca certs ...
	I0603 14:30:22.067637   11176 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:30:22.068223   11176 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 14:30:22.068494   11176 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 14:30:22.068807   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 14:30:22.069084   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 14:30:22.069222   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 14:30:22.069359   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 14:30:22.069852   11176 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 14:30:22.070226   11176 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 14:30:22.070311   11176 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 14:30:22.070383   11176 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 14:30:22.070383   11176 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 14:30:22.071109   11176 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 14:30:22.071544   11176 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 14:30:22.071804   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
	I0603 14:30:22.072006   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 14:30:22.072128   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:30:22.072228   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 14:30:22.125679   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 14:30:22.178543   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 14:30:22.227298   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 14:30:22.277417   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 14:30:22.324881   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 14:30:22.370394   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 14:30:22.428362   11176 ssh_runner.go:195] Run: openssl version
	I0603 14:30:22.436855   11176 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 14:30:22.448408   11176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 14:30:22.479892   11176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:30:22.486436   11176 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:30:22.486436   11176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:30:22.497566   11176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:30:22.507061   11176 command_runner.go:130] > b5213941
	I0603 14:30:22.518920   11176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 14:30:22.549859   11176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 14:30:22.584961   11176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 14:30:22.594260   11176 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 14:30:22.594260   11176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 14:30:22.607308   11176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 14:30:22.616480   11176 command_runner.go:130] > 51391683
	I0603 14:30:22.628355   11176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
	I0603 14:30:22.658712   11176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 14:30:22.691629   11176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 14:30:22.698152   11176 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 14:30:22.698152   11176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 14:30:22.710272   11176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 14:30:22.720079   11176 command_runner.go:130] > 3ec20f2e
	I0603 14:30:22.731967   11176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
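
Lines 14:30:22.428 through 14:30:22.731 install each CA into the guest's trust store: the PEM is placed under /usr/share/ca-certificates, linked into /etc/ssl/certs, and given an OpenSSL subject-hash symlink so TLS clients can locate it. The pattern for one certificate, recapped as plain shell (values taken from the log above):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")     # b5213941 in the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
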
	I0603 14:30:22.767794   11176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 14:30:22.774292   11176 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 14:30:22.775125   11176 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 14:30:22.775125   11176 kubeadm.go:928] updating node {m02 172.22.146.196 8443 v1.30.1 docker false true} ...
	I0603 14:30:22.775125   11176 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-720500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.146.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 14:30:22.787211   11176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 14:30:22.806866   11176 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	I0603 14:30:22.806866   11176 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 14:30:22.820023   11176 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 14:30:22.838284   11176 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0603 14:30:22.838345   11176 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 14:30:22.838345   11176 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0603 14:30:22.838566   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 14:30:22.838566   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
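
Since /var/lib/minikube/binaries/v1.30.1 is missing on m02, kubeadm, kubectl and kubelet are copied over from the host cache rather than downloaded inside the guest. The dl.k8s.io URLs logged above also support a manual fetch with checksum verification, for example (a sketch using the published .sha256 files; not a step this test performs):

    curl -fLO https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet
    echo "$(curl -fsL https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256)  kubelet" \
      | sha256sum --check
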
	I0603 14:30:22.856078   11176 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 14:30:22.856078   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 14:30:22.856460   11176 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 14:30:22.868365   11176 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 14:30:22.868517   11176 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 14:30:22.868688   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 14:30:22.895529   11176 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 14:30:22.895529   11176 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 14:30:22.895898   11176 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 14:30:22.896092   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 14:30:22.908434   11176 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 14:30:22.949969   11176 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 14:30:22.950019   11176 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 14:30:22.950019   11176 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0603 14:30:24.095683   11176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0603 14:30:24.120623   11176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0603 14:30:24.157610   11176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
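
The 321-byte drop-in written above corresponds to the kubelet configuration shown at 14:30:22.775: ExecStart is cleared and kubelet is relaunched from /var/lib/minikube/binaries/v1.30.1 with --hostname-override=multinode-720500-m02 and --node-ip=172.22.146.196. Once kubelet is started a few lines below, the effective unit and flags can be inspected with (suggested commands, not part of the test):

    systemctl cat kubelet
    cat /var/lib/kubelet/kubeadm-flags.env   # written later by 'kubeadm join'
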
	I0603 14:30:24.205470   11176 ssh_runner.go:195] Run: grep 172.22.150.195	control-plane.minikube.internal$ /etc/hosts
	I0603 14:30:24.211621   11176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.150.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 14:30:24.245318   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:30:24.449915   11176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 14:30:24.480306   11176 host.go:66] Checking if "multinode-720500" exists ...
	I0603 14:30:24.480958   11176 start.go:316] joinCluster: &{Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.150.195 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.146.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:30:24.481155   11176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 14:30:24.481201   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:30:26.679811   11176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:30:26.680877   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:30:26.680948   11176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:30:29.296735   11176 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:30:29.296735   11176 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:30:29.297332   11176 sshutil.go:53] new ssh client: &{IP:172.22.150.195 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:30:29.530722   11176 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vf1si8.3tvtbbjgta7m95g0 --discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f 
	I0603 14:30:29.530856   11176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0496603s)
	I0603 14:30:29.530978   11176 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.22.146.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 14:30:29.531058   11176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vf1si8.3tvtbbjgta7m95g0 --discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-720500-m02"
	I0603 14:30:29.747756   11176 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 14:30:30.647290   11176 command_runner.go:130] > [preflight] Running pre-flight checks
	I0603 14:30:30.647358   11176 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0603 14:30:30.647358   11176 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0603 14:30:30.647358   11176 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 14:30:30.647358   11176 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 14:30:30.647358   11176 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0603 14:30:30.647480   11176 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 14:30:30.647480   11176 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.767728ms
	I0603 14:30:30.647544   11176 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0603 14:30:30.647566   11176 command_runner.go:130] > This node has joined the cluster:
	I0603 14:30:30.647566   11176 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0603 14:30:30.647566   11176 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0603 14:30:30.647566   11176 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0603 14:30:30.647631   11176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vf1si8.3tvtbbjgta7m95g0 --discovery-token-ca-cert-hash sha256:63ed45109148d1aa8fb611949c54e151345ad9420412954bb2b895209f43d47f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-720500-m02": (1.1164988s)
	I0603 14:30:30.647683   11176 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 14:30:30.856547   11176 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0603 14:30:31.062076   11176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-720500-m02 minikube.k8s.io/updated_at=2024_06_03T14_30_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=multinode-720500 minikube.k8s.io/primary=false
	I0603 14:30:31.205217   11176 command_runner.go:130] > node/multinode-720500-m02 labeled
	I0603 14:30:31.205429   11176 start.go:318] duration metric: took 6.724416s to joinCluster
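
The whole join (14:30:24.481 to 14:30:31.205) is the standard kubeadm flow: mint a token on the control plane, run kubeadm join on the worker against the cri-dockerd socket, enable kubelet, then label the new node. Condensed into manual shell form (a sketch; the token and CA-cert hash are whatever --print-join-command returns at the time, and the values above are ephemeral):

    # On the control-plane node:
    kubeadm token create --print-join-command --ttl=0
    # On the worker, with the printed token/hash substituted in:
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-720500-m02
    sudo systemctl enable --now kubelet
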
	I0603 14:30:31.205611   11176 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.22.146.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 14:30:31.206447   11176 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:30:31.209450   11176 out.go:177] * Verifying Kubernetes components...
	I0603 14:30:31.222889   11176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:30:31.433557   11176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 14:30:31.463245   11176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:30:31.464386   11176 kapi.go:59] client config for multinode-720500: &rest.Config{Host:"https://172.22.150.195:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-720500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-720500\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 14:30:31.465686   11176 node_ready.go:35] waiting up to 6m0s for node "multinode-720500-m02" to be "Ready" ...
	I0603 14:30:31.465993   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:31.466035   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:31.466035   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:31.466149   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:31.478612   11176 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0603 14:30:31.478612   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:31.478612   11176 round_trippers.go:580]     Audit-Id: 3c4dc37c-74a6-435e-b602-d3e08dd25245
	I0603 14:30:31.478612   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:31.478612   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:31.478612   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:31.478612   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:31.478947   11176 round_trippers.go:580]     Content-Length: 3921
	I0603 14:30:31.478947   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:31 GMT
	I0603 14:30:31.479044   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"612","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0603 14:30:31.973647   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:31.973647   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:31.973647   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:31.973647   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:31.977065   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:30:31.977298   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:31.977298   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:31.977298   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:31.977298   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:31.977298   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:31 GMT
	I0603 14:30:31.977298   11176 round_trippers.go:580]     Audit-Id: 2e4971ee-a225-472a-adb5-aa6e4c8ec9f3
	I0603 14:30:31.977298   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:31.977298   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:31.977298   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:32.473405   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:32.473470   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:32.473470   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:32.473470   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:32.476074   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:30:32.476074   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:32.476074   11176 round_trippers.go:580]     Audit-Id: dc103ceb-3ae7-4dab-be28-869e2a3c52aa
	I0603 14:30:32.476074   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:32.476074   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:32.476074   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:32.476074   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:32.476074   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:32.476074   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:32 GMT
	I0603 14:30:32.477174   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:32.976429   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:32.976429   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:32.976429   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:32.976542   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:32.980777   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:30:32.980777   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:32.980777   11176 round_trippers.go:580]     Audit-Id: f1516676-e99d-4874-b312-ee694b390b8f
	I0603 14:30:32.980777   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:32.980777   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:32.980777   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:32.980777   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:32.980777   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:32.980777   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:32 GMT
	I0603 14:30:32.981298   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:33.478563   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:33.478705   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:33.478705   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:33.478705   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:33.484902   11176 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:30:33.484902   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:33.484902   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:33.484902   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:33.484902   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:33.484902   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:33 GMT
	I0603 14:30:33.484902   11176 round_trippers.go:580]     Audit-Id: f5c21d18-986d-4a5b-8012-9b44d41f07de
	I0603 14:30:33.484902   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:33.484902   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:33.484902   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:33.485646   11176 node_ready.go:53] node "multinode-720500-m02" has status "Ready":"False"
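
The repeated GETs on /api/v1/nodes/multinode-720500-m02 are minikube's readiness poll: roughly every half second it re-reads the Node object and checks its Ready condition, within the 6m0s budget set at 14:30:31.465. From the host, an equivalent wait can be expressed as (a suggestion, assuming the kubeconfig context is named after the profile):

    kubectl --context multinode-720500 wait node/multinode-720500-m02 \
      --for=condition=Ready --timeout=6m
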
	I0603 14:30:33.979582   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:33.979806   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:33.979806   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:33.979806   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:33.984233   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:30:33.984462   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:33.984462   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:33.984462   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:33.984462   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:33.984462   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:33.984462   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:33 GMT
	I0603 14:30:33.984462   11176 round_trippers.go:580]     Audit-Id: 75661d59-b627-4909-81f4-54242594436a
	I0603 14:30:33.984462   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:33.984672   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:34.480594   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:34.480594   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:34.480791   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:34.480791   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:34.484659   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:34.484659   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:34.484659   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:34.484659   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:34.484659   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:34.484659   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:34 GMT
	I0603 14:30:34.484659   11176 round_trippers.go:580]     Audit-Id: 653cc57f-8d46-4bb0-a600-7348df397410
	I0603 14:30:34.485048   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:34.485048   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:34.485211   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:34.969465   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:34.969465   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:34.969465   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:34.969689   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:34.973985   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:30:34.974227   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:34.974227   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:34 GMT
	I0603 14:30:34.974227   11176 round_trippers.go:580]     Audit-Id: 66f3ef66-1296-4f69-916f-70c9aa925b08
	I0603 14:30:34.974227   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:34.974227   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:34.974227   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:34.974227   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:34.974227   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:34.974335   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:35.467245   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:35.467387   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:35.467459   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:35.467459   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:35.474479   11176 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 14:30:35.474479   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:35.474479   11176 round_trippers.go:580]     Audit-Id: 210fa48f-5a84-4cb3-9ff3-290bea908065
	I0603 14:30:35.474479   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:35.474479   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:35.474479   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:35.474479   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:35.474479   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:35.474479   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:35 GMT
	I0603 14:30:35.474479   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:35.966534   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:35.966701   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:35.966765   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:35.966765   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:35.971243   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:30:35.971562   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:35.971562   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:35.971562   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:35.971630   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:35.971630   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:35.971630   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:35 GMT
	I0603 14:30:35.971630   11176 round_trippers.go:580]     Audit-Id: da4bb093-4bc3-4d94-94e3-66f56715798f
	I0603 14:30:35.971630   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:35.971842   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:35.972287   11176 node_ready.go:53] node "multinode-720500-m02" has status "Ready":"False"
	I0603 14:30:36.473159   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:36.473159   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:36.473159   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:36.473159   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:36.478544   11176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:30:36.478544   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:36.478544   11176 round_trippers.go:580]     Audit-Id: a8001cd6-e619-47d0-ba5c-d9de679bd636
	I0603 14:30:36.478544   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:36.478544   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:36.478544   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:36.478544   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:36.478544   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:36.478544   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:36 GMT
	I0603 14:30:36.478811   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:36.974603   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:36.974713   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:36.974713   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:36.974807   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:36.983137   11176 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 14:30:36.983137   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:36.983391   11176 round_trippers.go:580]     Audit-Id: 73d59126-2d68-4cf3-bc86-6729ad05a50b
	I0603 14:30:36.983391   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:36.983391   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:36.983391   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:36.983391   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:36.983391   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:36.983391   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:36 GMT
	I0603 14:30:36.983391   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:37.467080   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:37.467080   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:37.467080   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:37.467080   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:37.472010   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:30:37.472104   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:37.472104   11176 round_trippers.go:580]     Audit-Id: 322898c9-abb0-4a6e-9c19-36b66f353352
	I0603 14:30:37.472935   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:37.472935   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:37.472935   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:37.472935   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:37.472935   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:37.472935   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:37 GMT
	I0603 14:30:37.473136   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:37.968557   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:37.968774   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:37.968774   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:37.968774   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:37.971819   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:37.971908   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:37.971908   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:37.971908   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:37 GMT
	I0603 14:30:37.971908   11176 round_trippers.go:580]     Audit-Id: 328818c6-c5a5-4f13-bc2b-d32b031f2220
	I0603 14:30:37.971908   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:37.971908   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:37.971908   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:37.971980   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:37.972275   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:37.972824   11176 node_ready.go:53] node "multinode-720500-m02" has status "Ready":"False"
	I0603 14:30:38.477391   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:38.477391   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:38.477391   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:38.477513   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:38.480266   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:30:38.481142   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:38.481142   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:38.481142   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:38.481142   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:38 GMT
	I0603 14:30:38.481142   11176 round_trippers.go:580]     Audit-Id: 7909fdfd-4da5-4fc7-be8a-fa60e40a29bb
	I0603 14:30:38.481142   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:38.481142   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:38.481201   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:38.481267   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:38.966876   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:38.966907   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:38.966907   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:38.966981   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:38.970641   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:38.971577   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:38.971577   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:38.971577   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:38.971577   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:38 GMT
	I0603 14:30:38.971577   11176 round_trippers.go:580]     Audit-Id: 5c2f5ecd-a287-4f03-aaca-baa833f14003
	I0603 14:30:38.971642   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:38.971642   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:38.971642   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:38.971806   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:39.475369   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:39.475561   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:39.475561   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:39.475561   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:39.478806   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:39.478806   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:39.478806   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:39.478806   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:39.478806   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:39.478806   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:39.478806   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:39 GMT
	I0603 14:30:39.478806   11176 round_trippers.go:580]     Audit-Id: 5fffb260-f3b8-48b4-bee1-a811baca9522
	I0603 14:30:39.478806   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:39.478806   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:39.979210   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:39.979278   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:39.979278   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:39.979278   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:39.983528   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:30:39.983528   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:39.984127   11176 round_trippers.go:580]     Audit-Id: e0a3c99c-a8a8-43fe-9437-4e1eef494973
	I0603 14:30:39.984127   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:39.984127   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:39.984127   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:39.984127   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:39.984127   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:39.984127   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:39 GMT
	I0603 14:30:39.984399   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:39.984797   11176 node_ready.go:53] node "multinode-720500-m02" has status "Ready":"False"
	I0603 14:30:40.471908   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:40.471908   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:40.472001   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:40.472001   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:40.475311   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:40.475943   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:40.475943   11176 round_trippers.go:580]     Audit-Id: 899d58ad-b7db-4354-b59c-0fb0500e1a3d
	I0603 14:30:40.475943   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:40.475943   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:40.476017   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:40.476017   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:40.476017   11176 round_trippers.go:580]     Content-Length: 4030
	I0603 14:30:40.476017   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:40 GMT
	I0603 14:30:40.476132   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"615","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0603 14:30:40.976005   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:40.976005   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:40.976005   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:40.976005   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:40.979599   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:40.979599   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:40.979599   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:40.979599   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:40.979599   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:40.979599   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:40.979599   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:40 GMT
	I0603 14:30:40.980412   11176 round_trippers.go:580]     Audit-Id: 8a5a6def-aa1a-4efd-b85e-b84f56f46652
	I0603 14:30:40.980822   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:41.476204   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:41.476418   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:41.476418   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:41.476418   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:41.480546   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:30:41.480546   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:41.480546   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:41 GMT
	I0603 14:30:41.480546   11176 round_trippers.go:580]     Audit-Id: e37afe67-f88c-45b9-b173-ad77e136868e
	I0603 14:30:41.480546   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:41.480546   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:41.480546   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:41.480546   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:41.481460   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:41.967274   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:41.967274   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:41.967274   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:41.967274   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:41.975298   11176 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 14:30:41.975334   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:41.975334   11176 round_trippers.go:580]     Audit-Id: 7ff70d0f-1276-4709-9daf-ebf3bcd5bf4e
	I0603 14:30:41.975334   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:41.975334   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:41.975334   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:41.975334   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:41.975334   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:41 GMT
	I0603 14:30:41.975837   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:42.475497   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:42.475497   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:42.475497   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:42.475497   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:42.483026   11176 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 14:30:42.483026   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:42.484046   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:42.484046   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:42.484046   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:42.484046   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:42.484046   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:42 GMT
	I0603 14:30:42.484046   11176 round_trippers.go:580]     Audit-Id: db16e6f6-16f1-4a57-b3c9-ce4a6d01d989
	I0603 14:30:42.487015   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:42.487557   11176 node_ready.go:53] node "multinode-720500-m02" has status "Ready":"False"
	I0603 14:30:42.966910   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:42.966910   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:42.966910   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:42.966910   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:42.971405   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:30:42.971405   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:42.971601   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:42.971601   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:42.971601   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:42.971601   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:42.971601   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:42 GMT
	I0603 14:30:42.971601   11176 round_trippers.go:580]     Audit-Id: 94c3c111-b3d4-40e0-84db-69b4cca2dd0f
	I0603 14:30:42.971902   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:43.473071   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:43.473071   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:43.473071   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:43.473071   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:43.479745   11176 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:30:43.479745   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:43.479745   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:43.479745   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:43.479745   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:43 GMT
	I0603 14:30:43.479745   11176 round_trippers.go:580]     Audit-Id: d3b571ce-c528-494d-86af-86ecad4313ad
	I0603 14:30:43.479745   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:43.479745   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:43.479745   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:43.980940   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:43.981007   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:43.981007   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:43.981007   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:43.984870   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:43.984870   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:43.984870   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:43 GMT
	I0603 14:30:43.984870   11176 round_trippers.go:580]     Audit-Id: 376b44bd-c46c-48e9-8994-f1ab8871419e
	I0603 14:30:43.984870   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:43.984870   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:43.984870   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:43.984870   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:43.985892   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:44.476795   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:44.476795   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:44.476795   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:44.476795   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:44.480836   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:30:44.480836   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:44.480836   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:44.481069   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:44.481069   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:44 GMT
	I0603 14:30:44.481069   11176 round_trippers.go:580]     Audit-Id: 2e9863c7-f9c2-449e-9314-d41514f680b4
	I0603 14:30:44.481069   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:44.481069   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:44.481338   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:44.968253   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:44.968427   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:44.968427   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:44.968427   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:44.971807   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:44.971807   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:44.971807   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:44.972569   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:44.972569   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:44 GMT
	I0603 14:30:44.972569   11176 round_trippers.go:580]     Audit-Id: af5adbd4-df10-4dda-8a89-a61d4d0c00aa
	I0603 14:30:44.972569   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:44.972569   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:44.972846   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:44.973397   11176 node_ready.go:53] node "multinode-720500-m02" has status "Ready":"False"
	I0603 14:30:45.475668   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:45.475668   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:45.475668   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:45.475668   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:45.479237   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:45.479854   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:45.479940   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:45.479940   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:45.479940   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:45.479940   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:45.479940   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:45 GMT
	I0603 14:30:45.479940   11176 round_trippers.go:580]     Audit-Id: 27de2360-fe97-4bae-b87d-61b0d79ce1c3
	I0603 14:30:45.479940   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:45.981957   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:45.982263   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:45.982263   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:45.982263   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:45.985662   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:45.985662   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:45.985662   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:45 GMT
	I0603 14:30:45.986440   11176 round_trippers.go:580]     Audit-Id: bf43d5d3-ac49-4d82-b057-718aca0233b2
	I0603 14:30:45.986440   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:45.986440   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:45.986440   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:45.986440   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:45.986710   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:46.469708   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:46.469708   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:46.469708   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:46.469708   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:46.620106   11176 round_trippers.go:574] Response Status: 200 OK in 150 milliseconds
	I0603 14:30:46.620307   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:46.620307   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:46.620307   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:46.620307   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:46.620307   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:46.620307   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:46 GMT
	I0603 14:30:46.620456   11176 round_trippers.go:580]     Audit-Id: b8040cef-835a-430d-be28-8fa18f347c50
	I0603 14:30:46.620704   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:46.969672   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:46.969745   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:46.969745   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:46.969745   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:46.973309   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:46.973309   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:46.973309   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:46.973309   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:46.973309   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:46.973442   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:46.973442   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:46 GMT
	I0603 14:30:46.973442   11176 round_trippers.go:580]     Audit-Id: 368e407e-98b7-42a9-87f3-445644fc334b
	I0603 14:30:46.973922   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:46.974262   11176 node_ready.go:53] node "multinode-720500-m02" has status "Ready":"False"
	I0603 14:30:47.471037   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:47.471223   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:47.471223   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:47.471223   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:47.475020   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:47.475128   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:47.475128   11176 round_trippers.go:580]     Audit-Id: 94695f42-2292-4197-9640-20f3aec21fe1
	I0603 14:30:47.475128   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:47.475128   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:47.475128   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:47.475128   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:47.475128   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:47 GMT
	I0603 14:30:47.475876   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:47.966900   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:47.966971   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:47.966971   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:47.966971   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:47.970226   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:47.971254   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:47.971254   11176 round_trippers.go:580]     Audit-Id: 771f5205-505b-4f50-9451-029bf490d144
	I0603 14:30:47.971254   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:47.971254   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:47.971254   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:47.971254   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:47.971254   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:47 GMT
	I0603 14:30:47.971254   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:48.478837   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:48.478907   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:48.478907   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:48.478907   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:48.481458   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:30:48.481458   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:48.481458   11176 round_trippers.go:580]     Audit-Id: e089268d-70f8-41b5-a8a0-a05058cff7a1
	I0603 14:30:48.481458   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:48.481458   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:48.481458   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:48.481458   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:48.481458   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:48 GMT
	I0603 14:30:48.482615   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:48.966310   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:48.966592   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:48.966592   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:48.966592   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:48.969951   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:48.969951   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:48.969951   11176 round_trippers.go:580]     Audit-Id: 450abb2f-c611-41b9-a5db-c00150069dd9
	I0603 14:30:48.969951   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:48.970371   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:48.970371   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:48.970371   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:48.970371   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:48 GMT
	I0603 14:30:48.970371   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:49.467147   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:49.467147   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:49.467147   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:49.467147   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:49.471847   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:30:49.471847   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:49.471847   11176 round_trippers.go:580]     Audit-Id: 42b6e386-3f51-4bff-a921-63d690448d16
	I0603 14:30:49.472139   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:49.472139   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:49.472139   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:49.472139   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:49.472139   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:49 GMT
	I0603 14:30:49.472472   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"630","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0603 14:30:49.473106   11176 node_ready.go:53] node "multinode-720500-m02" has status "Ready":"False"
	I0603 14:30:49.969954   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:49.970030   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:49.970030   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:49.970030   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:49.973476   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:49.973476   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:49.973476   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:49 GMT
	I0603 14:30:49.973476   11176 round_trippers.go:580]     Audit-Id: 3ade1388-2dba-47b5-9e5c-da5de8fef3d0
	I0603 14:30:49.973476   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:49.973476   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:49.973476   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:49.973476   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:49.974678   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"649","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0603 14:30:49.975332   11176 node_ready.go:49] node "multinode-720500-m02" has status "Ready":"True"
	I0603 14:30:49.975428   11176 node_ready.go:38] duration metric: took 18.5095902s for node "multinode-720500-m02" to be "Ready" ...
	I0603 14:30:49.975428   11176 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 14:30:49.975578   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods
	I0603 14:30:49.975578   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:49.975578   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:49.975578   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:49.980845   11176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:30:49.980845   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:49.980845   11176 round_trippers.go:580]     Audit-Id: 9dc22c65-9192-4b80-a55d-d5a684e8b57e
	I0603 14:30:49.980925   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:49.980925   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:49.980925   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:49.980925   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:49.980925   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:49 GMT
	I0603 14:30:49.983986   11176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"649"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"447","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70486 chars]
	I0603 14:30:49.987610   11176 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:49.987859   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:30:49.987902   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:49.987937   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:49.987937   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:49.990404   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:30:49.991503   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:49.991503   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:49.991503   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:49.991503   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:49 GMT
	I0603 14:30:49.991503   11176 round_trippers.go:580]     Audit-Id: 18704b4a-4b63-49cc-aa8d-c73efeaeccc0
	I0603 14:30:49.991503   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:49.991503   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:49.991503   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"447","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0603 14:30:49.992133   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:30:49.992133   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:49.992133   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:49.992133   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:49.994605   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:30:49.994605   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:49.994605   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:49.994605   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:49.994605   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:49 GMT
	I0603 14:30:49.994605   11176 round_trippers.go:580]     Audit-Id: 0c8ebeb8-18b9-4a55-b25b-2c5ac7e946e0
	I0603 14:30:49.994605   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:49.994605   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:49.994605   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"457","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0603 14:30:49.995422   11176 pod_ready.go:92] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"True"
	I0603 14:30:49.995486   11176 pod_ready.go:81] duration metric: took 7.7601ms for pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:49.995486   11176 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:49.995550   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-720500
	I0603 14:30:49.995612   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:49.995612   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:49.995612   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:49.998346   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:30:49.998346   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:49.998346   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:49.998346   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:50 GMT
	I0603 14:30:49.998346   11176 round_trippers.go:580]     Audit-Id: 3e9dcb11-2be7-4806-9994-80f1eb108130
	I0603 14:30:49.998346   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:49.998346   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:49.998346   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:49.998346   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-720500","namespace":"kube-system","uid":"a99295b9-ba4f-4b3f-9bc7-3e6e09de9b09","resourceVersion":"298","creationTimestamp":"2024-06-03T14:27:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.150.195:2379","kubernetes.io/config.hash":"36433239452f37b4b0410f69c12da408","kubernetes.io/config.mirror":"36433239452f37b4b0410f69c12da408","kubernetes.io/config.seen":"2024-06-03T14:27:10.068477252Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0603 14:30:49.998346   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:30:49.998346   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:49.998346   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:49.998346   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:50.001391   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:50.001795   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:50.001795   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:50.001872   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:50.001872   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:50 GMT
	I0603 14:30:50.001967   11176 round_trippers.go:580]     Audit-Id: 267c2db4-0d62-4b52-8247-8fc8b0e1d6dc
	I0603 14:30:50.001967   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:50.002040   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:50.002171   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"457","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0603 14:30:50.002951   11176 pod_ready.go:92] pod "etcd-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:30:50.002951   11176 pod_ready.go:81] duration metric: took 7.4648ms for pod "etcd-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:50.002951   11176 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:50.002951   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-720500
	I0603 14:30:50.002951   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:50.002951   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:50.002951   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:50.013270   11176 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 14:30:50.013363   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:50.013363   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:50.013363   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:50.013363   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:50 GMT
	I0603 14:30:50.013363   11176 round_trippers.go:580]     Audit-Id: 314e3af0-0e33-4c2d-af81-9af512684c42
	I0603 14:30:50.013363   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:50.013363   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:50.013363   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-720500","namespace":"kube-system","uid":"aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef","resourceVersion":"301","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.22.150.195:8443","kubernetes.io/config.hash":"2dc25f3659bb9b137f23bf9424dba20e","kubernetes.io/config.mirror":"2dc25f3659bb9b137f23bf9424dba20e","kubernetes.io/config.seen":"2024-06-03T14:27:18.382155538Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0603 14:30:50.013959   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:30:50.013959   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:50.013959   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:50.013959   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:50.016566   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:30:50.016566   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:50.016566   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:50.016566   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:50 GMT
	I0603 14:30:50.016566   11176 round_trippers.go:580]     Audit-Id: e00ed163-c371-4d3b-8334-952ca94a715e
	I0603 14:30:50.017024   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:50.017024   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:50.017024   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:50.017209   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"457","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0603 14:30:50.017701   11176 pod_ready.go:92] pod "kube-apiserver-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:30:50.017779   11176 pod_ready.go:81] duration metric: took 14.8285ms for pod "kube-apiserver-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:50.017779   11176 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:50.017899   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-720500
	I0603 14:30:50.017899   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:50.017959   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:50.017959   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:50.020467   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:30:50.020467   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:50.020467   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:50.020467   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:50.020467   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:50.020467   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:50 GMT
	I0603 14:30:50.020467   11176 round_trippers.go:580]     Audit-Id: 19cff697-1a65-4ee7-9aad-b0d39c25f37e
	I0603 14:30:50.020910   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:50.021225   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-720500","namespace":"kube-system","uid":"6ba9c1e5-75bb-4731-9105-49acbbf3f237","resourceVersion":"324","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"78d1bd07ad8cdd8611c0b5d7e797ef30","kubernetes.io/config.mirror":"78d1bd07ad8cdd8611c0b5d7e797ef30","kubernetes.io/config.seen":"2024-06-03T14:27:18.382156638Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0603 14:30:50.022225   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:30:50.022225   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:50.022225   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:50.022225   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:50.025808   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:50.025808   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:50.026853   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:50.026853   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:50 GMT
	I0603 14:30:50.026879   11176 round_trippers.go:580]     Audit-Id: 9c35c664-9425-466c-a456-7a7e4f451a77
	I0603 14:30:50.026879   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:50.026879   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:50.026879   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:50.027024   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"457","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0603 14:30:50.027455   11176 pod_ready.go:92] pod "kube-controller-manager-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:30:50.027520   11176 pod_ready.go:81] duration metric: took 9.7403ms for pod "kube-controller-manager-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:50.027520   11176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-64l9x" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:50.172436   11176 request.go:629] Waited for 144.6669ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-64l9x
	I0603 14:30:50.172632   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-64l9x
	I0603 14:30:50.172632   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:50.172632   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:50.172632   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:50.175421   11176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:30:50.175421   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:50.175421   11176 round_trippers.go:580]     Audit-Id: e685ce29-f60d-499a-9b94-b6252a6bd5a1
	I0603 14:30:50.176425   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:50.176425   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:50.176425   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:50.176425   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:50.176425   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:50 GMT
	I0603 14:30:50.176548   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-64l9x","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a","resourceVersion":"406","creationTimestamp":"2024-06-03T14:27:32Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0603 14:30:50.377437   11176 request.go:629] Waited for 199.6668ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:30:50.377534   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:30:50.377534   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:50.377534   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:50.377783   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:50.381855   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:30:50.381855   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:50.382209   11176 round_trippers.go:580]     Audit-Id: a88e17e1-94e5-4a44-89fe-5c19e08f7397
	I0603 14:30:50.382209   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:50.382209   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:50.382209   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:50.382209   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:50.382209   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:50 GMT
	I0603 14:30:50.382633   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"457","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0603 14:30:50.383173   11176 pod_ready.go:92] pod "kube-proxy-64l9x" in "kube-system" namespace has status "Ready":"True"
	I0603 14:30:50.383173   11176 pod_ready.go:81] duration metric: took 355.65ms for pod "kube-proxy-64l9x" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:50.383173   11176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sm9rr" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:50.580533   11176 request.go:629] Waited for 197.1392ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sm9rr
	I0603 14:30:50.580623   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sm9rr
	I0603 14:30:50.580623   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:50.580623   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:50.580708   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:50.585050   11176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:30:50.585050   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:50.585050   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:50 GMT
	I0603 14:30:50.585050   11176 round_trippers.go:580]     Audit-Id: aa1ea351-7125-4c05-8a4f-ff3c09eb04c3
	I0603 14:30:50.585884   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:50.585884   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:50.585884   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:50.585884   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:50.586032   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sm9rr","generateName":"kube-proxy-","namespace":"kube-system","uid":"4f0321c0-f47d-463e-bda2-919f37735748","resourceVersion":"635","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0603 14:30:50.782116   11176 request.go:629] Waited for 195.6032ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:50.782377   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:30:50.782377   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:50.782377   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:50.782377   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:50.785656   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:50.785656   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:50.785656   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:50.785656   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:50.786610   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:50.786610   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:50.786610   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:50 GMT
	I0603 14:30:50.786610   11176 round_trippers.go:580]     Audit-Id: 2c0a4abd-c13e-465a-911c-60358a28d290
	I0603 14:30:50.786875   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"649","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0603 14:30:50.787848   11176 pod_ready.go:92] pod "kube-proxy-sm9rr" in "kube-system" namespace has status "Ready":"True"
	I0603 14:30:50.787848   11176 pod_ready.go:81] duration metric: took 404.6724ms for pod "kube-proxy-sm9rr" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:50.787848   11176 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:50.984445   11176 request.go:629] Waited for 196.2631ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-720500
	I0603 14:30:50.984530   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-720500
	I0603 14:30:50.984530   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:50.984530   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:50.984530   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:50.988234   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:50.988234   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:50.988234   11176 round_trippers.go:580]     Audit-Id: f0502e42-36f1-49b3-91c9-e2f143871191
	I0603 14:30:50.988234   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:50.988234   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:50.988234   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:50.988234   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:50.988234   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:50 GMT
	I0603 14:30:50.989023   11176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-720500","namespace":"kube-system","uid":"9d420d28-dde0-4504-a4d4-f840cab56ebe","resourceVersion":"322","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f58e384885de6f2352fb028e836ba47f","kubernetes.io/config.mirror":"f58e384885de6f2352fb028e836ba47f","kubernetes.io/config.seen":"2024-06-03T14:27:18.382157538Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0603 14:30:51.170896   11176 request.go:629] Waited for 180.1588ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:30:51.171137   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes/multinode-720500
	I0603 14:30:51.171231   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:51.171231   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:51.171231   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:51.174720   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:51.174855   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:51.174855   11176 round_trippers.go:580]     Audit-Id: 04e84125-62eb-4bb1-8542-63db84a55a19
	I0603 14:30:51.174855   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:51.174855   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:51.174855   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:51.174855   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:51.174855   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:51 GMT
	I0603 14:30:51.175070   11176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"457","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0603 14:30:51.175646   11176 pod_ready.go:92] pod "kube-scheduler-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:30:51.175646   11176 pod_ready.go:81] duration metric: took 387.7093ms for pod "kube-scheduler-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:30:51.175742   11176 pod_ready.go:38] duration metric: took 1.2002081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 14:30:51.175742   11176 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 14:30:51.187879   11176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 14:30:51.214015   11176 system_svc.go:56] duration metric: took 38.273ms WaitForService to wait for kubelet
	I0603 14:30:51.214089   11176 kubeadm.go:576] duration metric: took 20.0081972s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 14:30:51.214153   11176 node_conditions.go:102] verifying NodePressure condition ...
	I0603 14:30:51.375393   11176 request.go:629] Waited for 160.9709ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.150.195:8443/api/v1/nodes
	I0603 14:30:51.375393   11176 round_trippers.go:463] GET https://172.22.150.195:8443/api/v1/nodes
	I0603 14:30:51.375393   11176 round_trippers.go:469] Request Headers:
	I0603 14:30:51.375651   11176 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:30:51.375651   11176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:30:51.379537   11176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:30:51.379537   11176 round_trippers.go:577] Response Headers:
	I0603 14:30:51.379537   11176 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:30:51 GMT
	I0603 14:30:51.379537   11176 round_trippers.go:580]     Audit-Id: dac45515-f038-4b6e-a0bc-de8ed468286a
	I0603 14:30:51.379537   11176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:30:51.379537   11176 round_trippers.go:580]     Content-Type: application/json
	I0603 14:30:51.379537   11176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:30:51.379537   11176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:30:51.380711   11176 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"652"},"items":[{"metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"457","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9269 chars]
	I0603 14:30:51.381581   11176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:30:51.381581   11176 node_conditions.go:123] node cpu capacity is 2
	I0603 14:30:51.381581   11176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:30:51.381581   11176 node_conditions.go:123] node cpu capacity is 2
	I0603 14:30:51.381581   11176 node_conditions.go:105] duration metric: took 167.4263ms to run NodePressure ...
	I0603 14:30:51.381581   11176 start.go:240] waiting for startup goroutines ...
	I0603 14:30:51.381581   11176 start.go:254] writing updated cluster config ...
	I0603 14:30:51.394333   11176 ssh_runner.go:195] Run: rm -f paused
	I0603 14:30:51.535834   11176 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 14:30:51.542797   11176 out.go:177] * Done! kubectl is now configured to use "multinode-720500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 03 14:27:43 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:43.690000139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 14:27:43 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:43.724486632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 14:27:43 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:43.724545432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 14:27:43 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:43.724576132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 14:27:43 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:43.724816233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 14:27:43 multinode-720500 cri-dockerd[1227]: time="2024-06-03T14:27:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/38b548c7f105007ea217eb3af0981a11ac9ecbfca503b21d85486e0b994bd5ea/resolv.conf as [nameserver 172.22.144.1]"
	Jun 03 14:27:43 multinode-720500 cri-dockerd[1227]: time="2024-06-03T14:27:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a/resolv.conf as [nameserver 172.22.144.1]"
	Jun 03 14:27:44 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:44.126023333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 14:27:44 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:44.126422233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 14:27:44 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:44.126592733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 14:27:44 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:44.127391232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 14:27:44 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:44.237366495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 14:27:44 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:44.237624895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 14:27:44 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:44.237672295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 14:27:44 multinode-720500 dockerd[1325]: time="2024-06-03T14:27:44.238297594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 14:31:17 multinode-720500 dockerd[1325]: time="2024-06-03T14:31:17.191339449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 14:31:17 multinode-720500 dockerd[1325]: time="2024-06-03T14:31:17.191516051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 14:31:17 multinode-720500 dockerd[1325]: time="2024-06-03T14:31:17.191624852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 14:31:17 multinode-720500 dockerd[1325]: time="2024-06-03T14:31:17.191972757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 14:31:17 multinode-720500 cri-dockerd[1227]: time="2024-06-03T14:31:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 03 14:31:18 multinode-720500 cri-dockerd[1227]: time="2024-06-03T14:31:18Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 03 14:31:18 multinode-720500 dockerd[1325]: time="2024-06-03T14:31:18.969531593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 03 14:31:18 multinode-720500 dockerd[1325]: time="2024-06-03T14:31:18.969661993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 03 14:31:18 multinode-720500 dockerd[1325]: time="2024-06-03T14:31:18.969681793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 03 14:31:18 multinode-720500 dockerd[1325]: time="2024-06-03T14:31:18.971247094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a76f9e773a2f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   50 seconds ago      Running             busybox                   0                   e2a9c5dc3b1b0       busybox-fc5497c4f-n2t5d
	68e49c3e6ddaa       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   1ac710138e878       coredns-7db6d8ff4d-c9wpc
	097ab9a9a33bb       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   38b548c7f1050       storage-provisioner
	ab840a6a9856d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              4 minutes ago       Running             kindnet-cni               0                   91df341636e89       kindnet-26s27
	3823f2e2bdb28       747097150317f                                                                                         4 minutes ago       Running             kube-proxy                0                   45c98b77811e1       kube-proxy-64l9x
	dcd798ff8a466       91be940803172                                                                                         4 minutes ago       Running             kube-apiserver            0                   bf3e168388187       kube-apiserver-multinode-720500
	5185046feae6a       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   7dbe33ccede83       etcd-multinode-720500
	63a6ebee2e836       25a1387cdab82                                                                                         4 minutes ago       Running             kube-controller-manager   0                   19b3080db261a       kube-controller-manager-multinode-720500
	ec3860b2bb3ef       a52dc94f0a912                                                                                         4 minutes ago       Running             kube-scheduler            0                   73f8312902b01       kube-scheduler-multinode-720500
	
	
	==> coredns [68e49c3e6dda] <==
	[INFO] 10.244.1.2:52073 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001596s
	[INFO] 10.244.0.3:39307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001382s
	[INFO] 10.244.0.3:57391 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000513s
	[INFO] 10.244.0.3:40338 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001263s
	[INFO] 10.244.0.3:45271 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001333s
	[INFO] 10.244.0.3:50324 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000215901s
	[INFO] 10.244.0.3:51522 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001987s
	[INFO] 10.244.0.3:39150 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001291s
	[INFO] 10.244.0.3:56081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001424s
	[INFO] 10.244.1.2:46468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003026s
	[INFO] 10.244.1.2:57532 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130801s
	[INFO] 10.244.1.2:36166 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001469s
	[INFO] 10.244.1.2:58091 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001725s
	[INFO] 10.244.0.3:52049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274601s
	[INFO] 10.244.0.3:51870 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002814s
	[INFO] 10.244.0.3:51517 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001499s
	[INFO] 10.244.0.3:39242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000636s
	[INFO] 10.244.1.2:34329 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260201s
	[INFO] 10.244.1.2:47951 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001521s
	[INFO] 10.244.1.2:52718 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0003583s
	[INFO] 10.244.1.2:45357 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001838s
	[INFO] 10.244.0.3:50865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001742s
	[INFO] 10.244.0.3:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001322s
	[INFO] 10.244.0.3:51977 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	[INFO] 10.244.0.3:47306 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001807s
	
	
	==> describe nodes <==
	Name:               multinode-720500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-720500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=multinode-720500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T14_27_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 14:27:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-720500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 14:32:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 14:31:54 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 14:31:54 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 14:31:54 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 14:31:54 +0000   Mon, 03 Jun 2024 14:27:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.22.150.195
	  Hostname:    multinode-720500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c833140bb8149249a5f94349af9c27e
	  System UUID:                ea941aa7-cd12-1640-be08-34f8de2baf60
	  Boot ID:                    0220ffe2-183f-452f-a1dc-c54898ef24ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-n2t5d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-7db6d8ff4d-c9wpc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m35s
	  kube-system                 etcd-multinode-720500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m52s
	  kube-system                 kindnet-26s27                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m36s
	  kube-system                 kube-apiserver-multinode-720500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-controller-manager-multinode-720500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-proxy-64l9x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-scheduler-multinode-720500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m34s                  kube-proxy       
	  Normal  Starting                 4m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m58s (x8 over 4m58s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s (x8 over 4m58s)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s (x7 over 4m58s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m50s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m50s                  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m50s                  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m50s                  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m37s                  node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	  Normal  NodeReady                4m25s                  kubelet          Node multinode-720500 status is now: NodeReady
	
	
	Name:               multinode-720500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-720500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=multinode-720500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T14_30_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 14:30:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-720500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 14:32:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 14:31:31 +0000   Mon, 03 Jun 2024 14:30:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 14:31:31 +0000   Mon, 03 Jun 2024 14:30:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 14:31:31 +0000   Mon, 03 Jun 2024 14:30:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 14:31:31 +0000   Mon, 03 Jun 2024 14:30:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.22.146.196
	  Hostname:    multinode-720500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 235e819893284fd6a235e0cb3c7475f0
	  System UUID:                e57aaa06-73e1-b24d-bfac-b1ae5e512ff1
	  Boot ID:                    fe92bdd5-fbf4-4f1a-9684-a535d77de9c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mjhcf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kindnet-fmfz2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      98s
	  kube-system                 kube-proxy-sm9rr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 86s                kube-proxy       
	  Normal  NodeHasSufficientMemory  98s (x2 over 98s)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x2 over 98s)  kubelet          Node multinode-720500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x2 over 98s)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	  Normal  NodeReady                79s                kubelet          Node multinode-720500-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 14:26] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.162583] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[ +31.370220] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.117248] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.572702] systemd-fstab-generator[984]: Ignoring "noauto" option for root device
	[  +0.197555] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[  +0.262107] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +2.810093] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.239444] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.213157] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.272380] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[ +11.241633] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	[  +0.108049] kauditd_printk_skb: 205 callbacks suppressed
	[Jun 3 14:27] systemd-fstab-generator[1509]: Ignoring "noauto" option for root device
	[  +6.893968] systemd-fstab-generator[1709]: Ignoring "noauto" option for root device
	[  +0.104394] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.536851] systemd-fstab-generator[2109]: Ignoring "noauto" option for root device
	[  +0.161997] kauditd_printk_skb: 62 callbacks suppressed
	[ +15.009497] systemd-fstab-generator[2309]: Ignoring "noauto" option for root device
	[  +0.220007] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.579279] kauditd_printk_skb: 51 callbacks suppressed
	[Jun 3 14:30] hrtimer: interrupt took 3084864 ns
	[Jun 3 14:31] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [5185046feae6] <==
	{"level":"info","ts":"2024-06-03T14:27:13.026222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-03T14:27:13.026438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-03T14:27:13.026591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgPreVoteResp from a5b02d21ad5b31ff at term 1"}
	{"level":"info","ts":"2024-06-03T14:27:13.026765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became candidate at term 2"}
	{"level":"info","ts":"2024-06-03T14:27:13.026887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgVoteResp from a5b02d21ad5b31ff at term 2"}
	{"level":"info","ts":"2024-06-03T14:27:13.027262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became leader at term 2"}
	{"level":"info","ts":"2024-06-03T14:27:13.02739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a5b02d21ad5b31ff elected leader a5b02d21ad5b31ff at term 2"}
	{"level":"info","ts":"2024-06-03T14:27:13.034297Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T14:27:13.040483Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a5b02d21ad5b31ff","local-member-attributes":"{Name:multinode-720500 ClientURLs:[https://172.22.150.195:2379]}","request-path":"/0/members/a5b02d21ad5b31ff/attributes","cluster-id":"6a80a2fe8578e5e6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T14:27:13.04065Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T14:27:13.042492Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T14:27:13.048215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T14:27:13.048343Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T14:27:13.052306Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.22.150.195:2379"}
	{"level":"info","ts":"2024-06-03T14:27:13.053527Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T14:27:13.053849Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T14:27:13.054113Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T14:27:13.054826Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-06-03T14:27:40.21767Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.37706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-720500\" ","response":"range_response_count:1 size:4487"}
	{"level":"info","ts":"2024-06-03T14:27:40.218359Z","caller":"traceutil/trace.go:171","msg":"trace[1398706577] range","detail":"{range_begin:/registry/minions/multinode-720500; range_end:; response_count:1; response_revision:411; }","duration":"189.099963ms","start":"2024-06-03T14:27:40.029242Z","end":"2024-06-03T14:27:40.218342Z","steps":["trace[1398706577] 'range keys from in-memory index tree'  (duration: 188.26746ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T14:30:24.158999Z","caller":"traceutil/trace.go:171","msg":"trace[1161298695] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"250.462258ms","start":"2024-06-03T14:30:23.908518Z","end":"2024-06-03T14:30:24.158981Z","steps":["trace[1161298695] 'process raft request'  (duration: 250.229252ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T14:30:40.952792Z","caller":"traceutil/trace.go:171","msg":"trace[19047834] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"181.290927ms","start":"2024-06-03T14:30:40.771471Z","end":"2024-06-03T14:30:40.952762Z","steps":["trace[19047834] 'process raft request'  (duration: 180.652213ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T14:30:40.959832Z","caller":"traceutil/trace.go:171","msg":"trace[124321855] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"153.878848ms","start":"2024-06-03T14:30:40.805944Z","end":"2024-06-03T14:30:40.959822Z","steps":["trace[124321855] 'process raft request'  (duration: 153.737245ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T14:30:46.620692Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.763332ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-720500-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-06-03T14:30:46.620864Z","caller":"traceutil/trace.go:171","msg":"trace[1441654588] range","detail":"{range_begin:/registry/minions/multinode-720500-m02; range_end:; response_count:1; response_revision:640; }","duration":"146.068738ms","start":"2024-06-03T14:30:46.474778Z","end":"2024-06-03T14:30:46.620847Z","steps":["trace[1441654588] 'range keys from in-memory index tree'  (duration: 145.460926ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:32:08 up 7 min,  0 users,  load average: 0.39, 0.39, 0.20
	Linux multinode-720500 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ab840a6a9856] <==
	I0603 14:31:02.000107       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:31:12.014147       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:31:12.014673       1 main.go:227] handling current node
	I0603 14:31:12.014990       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:31:12.015200       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:31:22.020572       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:31:22.020688       1 main.go:227] handling current node
	I0603 14:31:22.020702       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:31:22.020713       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:31:32.038943       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:31:32.043270       1 main.go:227] handling current node
	I0603 14:31:32.043395       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:31:32.043430       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:31:42.049338       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:31:42.049489       1 main.go:227] handling current node
	I0603 14:31:42.049504       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:31:42.049512       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:31:52.056801       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:31:52.057255       1 main.go:227] handling current node
	I0603 14:31:52.057332       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:31:52.057380       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:32:02.066225       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:32:02.066265       1 main.go:227] handling current node
	I0603 14:32:02.066278       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:32:02.066284       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [dcd798ff8a46] <==
	I0603 14:27:15.978759       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0603 14:27:15.987708       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0603 14:27:15.987738       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 14:27:17.177846       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 14:27:17.295467       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 14:27:17.501668       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0603 14:27:17.527115       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.22.150.195]
	I0603 14:27:17.528781       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 14:27:17.559733       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 14:27:18.081706       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 14:27:18.369687       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 14:27:18.430707       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0603 14:27:18.449541       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 14:27:32.762703       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0603 14:27:32.859937       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0603 14:31:23.017487       1 conn.go:339] Error on socket receive: read tcp 172.22.150.195:8443->172.22.144.1:62679: use of closed network connection
	E0603 14:31:23.559331       1 conn.go:339] Error on socket receive: read tcp 172.22.150.195:8443->172.22.144.1:62681: use of closed network connection
	E0603 14:31:24.109591       1 conn.go:339] Error on socket receive: read tcp 172.22.150.195:8443->172.22.144.1:62683: use of closed network connection
	E0603 14:31:24.614132       1 conn.go:339] Error on socket receive: read tcp 172.22.150.195:8443->172.22.144.1:62685: use of closed network connection
	E0603 14:31:25.117403       1 conn.go:339] Error on socket receive: read tcp 172.22.150.195:8443->172.22.144.1:62687: use of closed network connection
	E0603 14:31:25.625363       1 conn.go:339] Error on socket receive: read tcp 172.22.150.195:8443->172.22.144.1:62689: use of closed network connection
	E0603 14:31:26.569871       1 conn.go:339] Error on socket receive: read tcp 172.22.150.195:8443->172.22.144.1:62692: use of closed network connection
	E0603 14:31:37.096224       1 conn.go:339] Error on socket receive: read tcp 172.22.150.195:8443->172.22.144.1:62694: use of closed network connection
	E0603 14:31:37.605061       1 conn.go:339] Error on socket receive: read tcp 172.22.150.195:8443->172.22.144.1:62697: use of closed network connection
	E0603 14:31:48.138689       1 conn.go:339] Error on socket receive: read tcp 172.22.150.195:8443->172.22.144.1:62699: use of closed network connection
	
	
	==> kube-controller-manager [63a6ebee2e83] <==
	I0603 14:27:33.110436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="199.281878ms"
	I0603 14:27:33.230475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="119.89616ms"
	I0603 14:27:33.230569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59µs"
	I0603 14:27:34.176449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.004127ms"
	I0603 14:27:34.199426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.643683ms"
	I0603 14:27:34.201037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.6µs"
	I0603 14:27:43.109227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="168.101µs"
	I0603 14:27:43.154756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="203.6µs"
	I0603 14:27:44.622262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.3µs"
	I0603 14:27:45.655101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.946906ms"
	I0603 14:27:45.656447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.098µs"
	I0603 14:27:46.817078       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:30:30.530460       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:30:30.563054       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m02" podCIDRs=["10.244.1.0/24"]
	I0603 14:30:31.846889       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:30:49.741096       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:31:16.611365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.145667ms"
	I0603 14:31:16.634251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.843998ms"
	I0603 14:31:16.634722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="196.103µs"
	I0603 14:31:16.635057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.4µs"
	I0603 14:31:16.670503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.001µs"
	I0603 14:31:19.698737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.129108ms"
	I0603 14:31:19.698833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.8µs"
	I0603 14:31:20.055879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.87041ms"
	I0603 14:31:20.057158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.2µs"
	
	
	==> kube-proxy [3823f2e2bdb2] <==
	I0603 14:27:34.209759       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:27:34.223354       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.150.195"]
	I0603 14:27:34.293018       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:27:34.293146       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:27:34.293240       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:27:34.299545       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:27:34.300745       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:27:34.300860       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:27:34.304329       1 config.go:192] "Starting service config controller"
	I0603 14:27:34.304371       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:27:34.304437       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:27:34.304447       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:27:34.308322       1 config.go:319] "Starting node config controller"
	I0603 14:27:34.308362       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:27:34.409156       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ec3860b2bb3e] <==
	W0603 14:27:16.294257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 14:27:16.294495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 14:27:16.364252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 14:27:16.364604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 14:27:16.422522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 14:27:16.422581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 14:27:16.468112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 14:27:16.468324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 14:27:16.510809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 14:27:16.511288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 14:27:16.596260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 14:27:16.596369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 14:27:16.607837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 14:27:16.608073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 14:27:16.665087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 14:27:16.666440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 14:27:16.711247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 14:27:16.711594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 14:27:16.716923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 14:27:16.716968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 14:27:16.731690       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 14:27:16.732816       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 14:27:16.743716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 14:27:16.743766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:27:18.441261       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 14:27:44 multinode-720500 kubelet[2116]: I0603 14:27:44.620780    2116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podStartSLOduration=11.620761119 podStartE2EDuration="11.620761119s" podCreationTimestamp="2024-06-03 14:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:27:44.61984002 +0000 UTC m=+26.367029876" watchObservedRunningTime="2024-06-03 14:27:44.620761119 +0000 UTC m=+26.367951075"
	Jun 03 14:27:45 multinode-720500 kubelet[2116]: I0603 14:27:45.630686    2116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.630664995 podStartE2EDuration="4.630664995s" podCreationTimestamp="2024-06-03 14:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:27:44.653758778 +0000 UTC m=+26.400948734" watchObservedRunningTime="2024-06-03 14:27:45.630664995 +0000 UTC m=+27.377854851"
	Jun 03 14:28:18 multinode-720500 kubelet[2116]: E0603 14:28:18.473539    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:28:18 multinode-720500 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:28:18 multinode-720500 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:28:18 multinode-720500 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:28:18 multinode-720500 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:29:18 multinode-720500 kubelet[2116]: E0603 14:29:18.472431    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:29:18 multinode-720500 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:29:18 multinode-720500 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:29:18 multinode-720500 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:29:18 multinode-720500 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:30:18 multinode-720500 kubelet[2116]: E0603 14:30:18.473466    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:30:18 multinode-720500 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:30:18 multinode-720500 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:30:18 multinode-720500 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:30:18 multinode-720500 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:31:16 multinode-720500 kubelet[2116]: I0603 14:31:16.617993    2116 topology_manager.go:215] "Topology Admit Handler" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef" podNamespace="default" podName="busybox-fc5497c4f-n2t5d"
	Jun 03 14:31:16 multinode-720500 kubelet[2116]: I0603 14:31:16.721128    2116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5kjf\" (UniqueName: \"kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf\") pod \"busybox-fc5497c4f-n2t5d\" (UID: \"5a2e152e-3390-4e7e-bcad-d3464a08ffef\") " pod="default/busybox-fc5497c4f-n2t5d"
	Jun 03 14:31:18 multinode-720500 kubelet[2116]: E0603 14:31:18.475808    2116 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:31:18 multinode-720500 kubelet[2116]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:31:18 multinode-720500 kubelet[2116]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:31:18 multinode-720500 kubelet[2116]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:31:18 multinode-720500 kubelet[2116]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:31:19 multinode-720500 kubelet[2116]: I0603 14:31:19.685943    2116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-n2t5d" podStartSLOduration=2.440956802 podStartE2EDuration="3.685787998s" podCreationTimestamp="2024-06-03 14:31:16 +0000 UTC" firstStartedPulling="2024-06-03 14:31:17.447001142 +0000 UTC m=+239.194190998" lastFinishedPulling="2024-06-03 14:31:18.691832338 +0000 UTC m=+240.439022194" observedRunningTime="2024-06-03 14:31:19.685669798 +0000 UTC m=+241.432859754" watchObservedRunningTime="2024-06-03 14:31:19.685787998 +0000 UTC m=+241.432977854"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 14:32:00.525379   12832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-720500 -n multinode-720500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-720500 -n multinode-720500: (12.3079204s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-720500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (57.36s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (491.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-720500
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-720500
E0603 14:48:18.026765   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-720500: (1m38.2506626s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-720500 --wait=true -v=8 --alsologtostderr
E0603 14:48:37.370629   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 14:50:14.781796   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 14:53:37.376944   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-720500 --wait=true -v=8 --alsologtostderr: exit status 1 (5m42.3169746s)

                                                
                                                
-- stdout --
	* [multinode-720500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-720500" primary control-plane node in "multinode-720500" cluster
	* Restarting existing hyperv VM for "multinode-720500" ...
	* Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-720500-m02" worker node in "multinode-720500" cluster
	* Restarting existing hyperv VM for "multinode-720500-m02" ...
	* Found network options:
	  - NO_PROXY=172.22.154.20
	  - NO_PROXY=172.22.154.20
	* Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	  - env NO_PROXY=172.22.154.20

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 14:48:28.958166    9752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0603 14:48:29.033726    9752 out.go:291] Setting OutFile to fd 1608 ...
	I0603 14:48:29.034543    9752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:48:29.034543    9752 out.go:304] Setting ErrFile to fd 1204...
	I0603 14:48:29.034543    9752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:48:29.059913    9752 out.go:298] Setting JSON to false
	I0603 14:48:29.065561    9752 start.go:129] hostinfo: {"hostname":"minikube3","uptime":27037,"bootTime":1717399071,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 14:48:29.066135    9752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 14:48:29.170301    9752 out.go:177] * [multinode-720500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 14:48:29.228986    9752 notify.go:220] Checking for updates...
	I0603 14:48:29.260718    9752 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:48:29.270991    9752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 14:48:29.312877    9752 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 14:48:29.323929    9752 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 14:48:29.359902    9752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 14:48:29.367166    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:48:29.367549    9752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 14:48:34.915447    9752 out.go:177] * Using the hyperv driver based on existing profile
	I0603 14:48:34.926221    9752 start.go:297] selected driver: hyperv
	I0603 14:48:34.926282    9752 start.go:901] validating driver "hyperv" against &{Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.150.195 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.146.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.22.151.134 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:48:34.926282    9752 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 14:48:34.983615    9752 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 14:48:34.983615    9752 cni.go:84] Creating CNI manager for ""
	I0603 14:48:34.983615    9752 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 14:48:34.984134    9752 start.go:340] cluster config:
	{Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.150.195 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.146.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.22.151.134 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:48:34.984134    9752 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 14:48:35.116720    9752 out.go:177] * Starting "multinode-720500" primary control-plane node in "multinode-720500" cluster
	I0603 14:48:35.126028    9752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 14:48:35.126360    9752 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 14:48:35.126360    9752 cache.go:56] Caching tarball of preloaded images
	I0603 14:48:35.126929    9752 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 14:48:35.127075    9752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 14:48:35.127075    9752 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:48:35.129977    9752 start.go:360] acquireMachinesLock for multinode-720500: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 14:48:35.129977    9752 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-720500"
	I0603 14:48:35.130979    9752 start.go:96] Skipping create...Using existing machine configuration
	I0603 14:48:35.130979    9752 fix.go:54] fixHost starting: 
	I0603 14:48:35.131216    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:48:37.961475    9752 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 14:48:37.962232    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:37.962555    9752 fix.go:112] recreateIfNeeded on multinode-720500: state=Stopped err=<nil>
	W0603 14:48:37.962610    9752 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 14:48:37.966652    9752 out.go:177] * Restarting existing hyperv VM for "multinode-720500" ...
	I0603 14:48:37.969729    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-720500
	I0603 14:48:41.039660    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:48:41.039660    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:41.039660    9752 main.go:141] libmachine: Waiting for host to start...
	I0603 14:48:41.039660    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:48:43.342153    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:48:43.342904    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:43.342960    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:48:45.881880    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:48:45.881880    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:46.884117    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:48:49.103915    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:48:49.104037    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:49.104037    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:48:51.648696    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:48:51.649337    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:52.656704    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:48:54.893056    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:48:54.893056    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:54.893965    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:48:57.449195    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:48:57.449195    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:58.454090    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:00.713698    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:00.713919    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:00.713919    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:03.303429    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:49:03.303429    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:04.313395    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:06.563037    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:06.563373    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:06.563373    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:09.121286    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:09.121375    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:09.124435    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:11.266115    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:11.266115    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:11.267086    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:13.790586    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:13.791715    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:13.792040    9752 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:49:13.794642    9752 machine.go:94] provisionDockerMachine start ...
	I0603 14:49:13.794903    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:15.909412    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:15.909412    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:15.909637    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:18.439632    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:18.440518    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:18.446685    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:49:18.447432    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:49:18.447432    9752 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 14:49:18.575024    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 14:49:18.575024    9752 buildroot.go:166] provisioning hostname "multinode-720500"
	I0603 14:49:18.575257    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:20.715549    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:20.716567    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:20.716567    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:23.280598    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:23.280654    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:23.286807    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:49:23.286975    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:49:23.286975    9752 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-720500 && echo "multinode-720500" | sudo tee /etc/hostname
	I0603 14:49:23.445247    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-720500
	
	I0603 14:49:23.445247    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:25.560706    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:25.560706    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:25.561383    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:28.078930    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:28.078930    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:28.084893    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:49:28.085420    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:49:28.085420    9752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-720500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-720500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-720500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 14:49:28.238233    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 14:49:28.238300    9752 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 14:49:28.238366    9752 buildroot.go:174] setting up certificates
	I0603 14:49:28.238428    9752 provision.go:84] configureAuth start
	I0603 14:49:28.238496    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:30.360753    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:30.360898    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:30.360898    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:32.921871    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:32.921871    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:32.921871    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:35.053432    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:35.053432    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:35.054034    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:37.619479    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:37.619705    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:37.619823    9752 provision.go:143] copyHostCerts
	I0603 14:49:37.619914    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 14:49:37.620347    9752 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 14:49:37.620347    9752 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 14:49:37.620796    9752 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 14:49:37.622012    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 14:49:37.622208    9752 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 14:49:37.622306    9752 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 14:49:37.622649    9752 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 14:49:37.623828    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 14:49:37.624080    9752 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 14:49:37.624156    9752 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 14:49:37.624551    9752 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 14:49:37.625494    9752 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-720500 san=[127.0.0.1 172.22.154.20 localhost minikube multinode-720500]
	I0603 14:49:37.848064    9752 provision.go:177] copyRemoteCerts
	I0603 14:49:37.860989    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 14:49:37.860989    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:39.985608    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:39.985608    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:39.985742    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:42.500636    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:42.501485    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:42.501572    9752 sshutil.go:53] new ssh client: &{IP:172.22.154.20 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:49:42.606230    9752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7441646s)
	I0603 14:49:42.606300    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 14:49:42.606805    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 14:49:42.653354    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 14:49:42.653354    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 14:49:42.701189    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 14:49:42.701189    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0603 14:49:42.751247    9752 provision.go:87] duration metric: took 14.5126318s to configureAuth
	I0603 14:49:42.751404    9752 buildroot.go:189] setting minikube options for container-runtime
	I0603 14:49:42.752015    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:49:42.752228    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:44.879240    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:44.879240    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:44.880170    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:47.388154    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:47.388154    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:47.395274    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:49:47.395274    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:49:47.395274    9752 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 14:49:47.523619    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 14:49:47.523681    9752 buildroot.go:70] root file system type: tmpfs
	I0603 14:49:47.523900    9752 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 14:49:47.523972    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:49.624987    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:49.625060    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:49.625132    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:52.152605    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:52.153750    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:52.159533    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:49:52.160219    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:49:52.160219    9752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 14:49:52.325685    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 14:49:52.325810    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:54.446568    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:54.447653    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:54.447653    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:56.946899    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:56.947038    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:56.954307    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:49:56.955367    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:49:56.955541    9752 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 14:49:59.453668    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 14:49:59.453668    9752 machine.go:97] duration metric: took 45.6585468s to provisionDockerMachine
	I0603 14:49:59.453668    9752 start.go:293] postStartSetup for "multinode-720500" (driver="hyperv")
	I0603 14:49:59.453668    9752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 14:49:59.465656    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 14:49:59.466651    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:50:01.597546    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:50:01.598582    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:01.598623    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:50:04.123124    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:50:04.123124    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:04.124085    9752 sshutil.go:53] new ssh client: &{IP:172.22.154.20 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:50:04.232405    9752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7657143s)
	I0603 14:50:04.247578    9752 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 14:50:04.255257    9752 command_runner.go:130] > NAME=Buildroot
	I0603 14:50:04.255257    9752 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 14:50:04.255257    9752 command_runner.go:130] > ID=buildroot
	I0603 14:50:04.255257    9752 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 14:50:04.255257    9752 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 14:50:04.255390    9752 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 14:50:04.255390    9752 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 14:50:04.256096    9752 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 14:50:04.256950    9752 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 14:50:04.256997    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 14:50:04.272630    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 14:50:04.294656    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 14:50:04.342460    9752 start.go:296] duration metric: took 4.8887521s for postStartSetup
	I0603 14:50:04.342460    9752 fix.go:56] duration metric: took 1m29.210749s for fixHost
	I0603 14:50:04.342460    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:50:06.506928    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:50:06.506928    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:06.507770    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:50:08.999719    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:50:09.000025    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:09.005781    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:50:09.006397    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:50:09.006397    9752 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 14:50:09.147055    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717426209.149042022
	
	I0603 14:50:09.147198    9752 fix.go:216] guest clock: 1717426209.149042022
	I0603 14:50:09.147198    9752 fix.go:229] Guest: 2024-06-03 14:50:09.149042022 +0000 UTC Remote: 2024-06-03 14:50:04.3424603 +0000 UTC m=+95.473466101 (delta=4.806581722s)
	I0603 14:50:09.147338    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:50:11.257684    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:50:11.257684    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:11.258609    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:50:13.800759    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:50:13.800930    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:13.806913    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:50:13.807365    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:50:13.807365    9752 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717426209
	I0603 14:50:13.944040    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 14:50:09 UTC 2024
	
	I0603 14:50:13.944040    9752 fix.go:236] clock set: Mon Jun  3 14:50:09 UTC 2024
	 (err=<nil>)
	I0603 14:50:13.944040    9752 start.go:83] releasing machines lock for "multinode-720500", held for 1m38.813253s
	I0603 14:50:13.944568    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:50:16.056880    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:50:16.057247    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:16.057383    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:50:18.573159    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:50:18.573287    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:18.577870    9752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 14:50:18.577959    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:50:18.588715    9752 ssh_runner.go:195] Run: cat /version.json
	I0603 14:50:18.588715    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:50:20.781452    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:50:20.781452    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:20.781452    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:50:20.782890    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:50:20.782890    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:20.783064    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:50:23.480985    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:50:23.481273    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:23.481273    9752 sshutil.go:53] new ssh client: &{IP:172.22.154.20 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:50:23.499831    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:50:23.500315    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:23.500489    9752 sshutil.go:53] new ssh client: &{IP:172.22.154.20 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:50:23.664510    9752 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 14:50:23.664510    9752 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0865094s)
	I0603 14:50:23.664510    9752 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0603 14:50:23.664868    9752 ssh_runner.go:235] Completed: cat /version.json: (5.0761106s)
	I0603 14:50:23.676417    9752 ssh_runner.go:195] Run: systemctl --version
	I0603 14:50:23.685755    9752 command_runner.go:130] > systemd 252 (252)
	I0603 14:50:23.685942    9752 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0603 14:50:23.698723    9752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 14:50:23.707730    9752 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0603 14:50:23.708130    9752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 14:50:23.718836    9752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 14:50:23.745447    9752 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0603 14:50:23.746088    9752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 14:50:23.746088    9752 start.go:494] detecting cgroup driver to use...
	I0603 14:50:23.746413    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:50:23.779239    9752 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 14:50:23.791357    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 14:50:23.821391    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 14:50:23.839481    9752 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 14:50:23.852034    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 14:50:23.881821    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 14:50:23.915768    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 14:50:23.946659    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 14:50:23.977991    9752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 14:50:24.007673    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 14:50:24.039790    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 14:50:24.079146    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 14:50:24.111707    9752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 14:50:24.130086    9752 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 14:50:24.142239    9752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 14:50:24.178614    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:50:24.387612    9752 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 14:50:24.419480    9752 start.go:494] detecting cgroup driver to use...
	I0603 14:50:24.432571    9752 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 14:50:24.454094    9752 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 14:50:24.454094    9752 command_runner.go:130] > [Unit]
	I0603 14:50:24.454094    9752 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 14:50:24.454094    9752 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 14:50:24.454403    9752 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 14:50:24.454403    9752 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 14:50:24.454403    9752 command_runner.go:130] > StartLimitBurst=3
	I0603 14:50:24.454465    9752 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 14:50:24.454465    9752 command_runner.go:130] > [Service]
	I0603 14:50:24.454465    9752 command_runner.go:130] > Type=notify
	I0603 14:50:24.454465    9752 command_runner.go:130] > Restart=on-failure
	I0603 14:50:24.454465    9752 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 14:50:24.454465    9752 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 14:50:24.454465    9752 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 14:50:24.454465    9752 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 14:50:24.454465    9752 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 14:50:24.454465    9752 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 14:50:24.454465    9752 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 14:50:24.454465    9752 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 14:50:24.454465    9752 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 14:50:24.454465    9752 command_runner.go:130] > ExecStart=
	I0603 14:50:24.454465    9752 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 14:50:24.454465    9752 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 14:50:24.454465    9752 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 14:50:24.454465    9752 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 14:50:24.454465    9752 command_runner.go:130] > LimitNOFILE=infinity
	I0603 14:50:24.454465    9752 command_runner.go:130] > LimitNPROC=infinity
	I0603 14:50:24.454465    9752 command_runner.go:130] > LimitCORE=infinity
	I0603 14:50:24.454465    9752 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 14:50:24.454465    9752 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 14:50:24.454465    9752 command_runner.go:130] > TasksMax=infinity
	I0603 14:50:24.454465    9752 command_runner.go:130] > TimeoutStartSec=0
	I0603 14:50:24.454465    9752 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 14:50:24.455042    9752 command_runner.go:130] > Delegate=yes
	I0603 14:50:24.455150    9752 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 14:50:24.455150    9752 command_runner.go:130] > KillMode=process
	I0603 14:50:24.455150    9752 command_runner.go:130] > [Install]
	I0603 14:50:24.455150    9752 command_runner.go:130] > WantedBy=multi-user.target
	I0603 14:50:24.468304    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:50:24.503178    9752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 14:50:24.542792    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:50:24.577927    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 14:50:24.612015    9752 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 14:50:24.671151    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 14:50:24.691092    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:50:24.723859    9752 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 14:50:24.738187    9752 ssh_runner.go:195] Run: which cri-dockerd
	I0603 14:50:24.744529    9752 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 14:50:24.755198    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 14:50:24.773151    9752 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 14:50:24.816336    9752 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 14:50:25.023790    9752 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 14:50:25.225274    9752 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 14:50:25.225549    9752 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 14:50:25.270969    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:50:25.473279    9752 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 14:50:28.102687    9752 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.628383s)
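The 130-byte /etc/docker/daemon.json written at 14:50:25 is not echoed in the log; a file equivalent to what minikube typically generates when it forces the "cgroupfs" driver looks roughly like this (a sketch, not the verbatim payload):

  {
    "exec-opts": ["native.cgroupdriver=cgroupfs"],
    "log-driver": "json-file",
    "log-opts": { "max-size": "100m" },
    "storage-driver": "overlay2"
  }

  $ docker info --format '{{.CgroupDriver}}'    # prints "cgroupfs" once the restart above completes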
	I0603 14:50:28.114992    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 14:50:28.156703    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 14:50:28.193229    9752 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 14:50:28.396266    9752 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 14:50:28.611450    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:50:28.808534    9752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 14:50:28.848776    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 14:50:28.884709    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:50:29.087319    9752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 14:50:29.201633    9752 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 14:50:29.214914    9752 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 14:50:29.223057    9752 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 14:50:29.223116    9752 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 14:50:29.223153    9752 command_runner.go:130] > Device: 0,22	Inode: 851         Links: 1
	I0603 14:50:29.223153    9752 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 14:50:29.223153    9752 command_runner.go:130] > Access: 2024-06-03 14:50:29.114679823 +0000
	I0603 14:50:29.223153    9752 command_runner.go:130] > Modify: 2024-06-03 14:50:29.114679823 +0000
	I0603 14:50:29.223223    9752 command_runner.go:130] > Change: 2024-06-03 14:50:29.119679828 +0000
	I0603 14:50:29.223223    9752 command_runner.go:130] >  Birth: -
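With /etc/crictl.yaml pointing at unix:///var/run/cri-dockerd.sock, the socket can also be probed manually once cri-docker.service is back up (a sketch using the same paths as in the log):

  $ sudo stat /var/run/cri-dockerd.sock
  $ sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version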
	I0603 14:50:29.223282    9752 start.go:562] Will wait 60s for crictl version
	I0603 14:50:29.235862    9752 ssh_runner.go:195] Run: which crictl
	I0603 14:50:29.242226    9752 command_runner.go:130] > /usr/bin/crictl
	I0603 14:50:29.253215    9752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 14:50:29.307257    9752 command_runner.go:130] > Version:  0.1.0
	I0603 14:50:29.307340    9752 command_runner.go:130] > RuntimeName:  docker
	I0603 14:50:29.307340    9752 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 14:50:29.307381    9752 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 14:50:29.307381    9752 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 14:50:29.317342    9752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 14:50:29.349500    9752 command_runner.go:130] > 26.0.2
	I0603 14:50:29.359517    9752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 14:50:29.389620    9752 command_runner.go:130] > 26.0.2
	I0603 14:50:29.394562    9752 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 14:50:29.394562    9752 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 14:50:29.399573    9752 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 14:50:29.399573    9752 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 14:50:29.399573    9752 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 14:50:29.399573    9752 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 14:50:29.401870    9752 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 14:50:29.401870    9752 ip.go:210] interface addr: 172.22.144.1/20
	I0603 14:50:29.416773    9752 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 14:50:29.423378    9752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
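The one-liner above keeps the hosts update idempotent: any stale line ending in a tab plus "host.minikube.internal" is filtered out before the fresh mapping is appended, so repeated starts never accumulate duplicate entries. The same idiom spelled out step by step (values taken from the log):

  $ { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo -e "172.22.144.1\thost.minikube.internal"; } > /tmp/hosts.new
  $ sudo cp /tmp/hosts.new /etc/hosts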
	I0603 14:50:29.444808    9752 kubeadm.go:877] updating cluster {Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.154.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.146.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.22.151.134 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 14:50:29.445780    9752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 14:50:29.455433    9752 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 14:50:29.479242    9752 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 14:50:29.479839    9752 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 14:50:29.479839    9752 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 14:50:29.479839    9752 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 14:50:29.479839    9752 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0603 14:50:29.479839    9752 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 14:50:29.479903    9752 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 14:50:29.479903    9752 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 14:50:29.479903    9752 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 14:50:29.479903    9752 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0603 14:50:29.480099    9752 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0603 14:50:29.480194    9752 docker.go:615] Images already preloaded, skipping extraction
	I0603 14:50:29.490256    9752 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 14:50:29.515638    9752 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 14:50:29.515688    9752 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 14:50:29.515688    9752 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 14:50:29.515755    9752 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 14:50:29.515755    9752 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0603 14:50:29.515755    9752 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 14:50:29.515755    9752 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 14:50:29.515819    9752 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 14:50:29.515819    9752 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 14:50:29.515819    9752 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0603 14:50:29.515885    9752 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0603 14:50:29.515925    9752 cache_images.go:84] Images are preloaded, skipping loading
	I0603 14:50:29.515992    9752 kubeadm.go:928] updating node { 172.22.154.20 8443 v1.30.1 docker true true} ...
	I0603 14:50:29.516257    9752 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-720500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.154.20
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
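The kubelet unit snippet above is what ends up in the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in scp'd a few lines further down (316 bytes); the merged result of the base unit plus drop-ins can be inspected on the guest with:

  $ sudo systemctl cat kubelet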
	I0603 14:50:29.526981    9752 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 14:50:29.557673    9752 command_runner.go:130] > cgroupfs
	I0603 14:50:29.559006    9752 cni.go:84] Creating CNI manager for ""
	I0603 14:50:29.559006    9752 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 14:50:29.559072    9752 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 14:50:29.559127    9752 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.22.154.20 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-720500 NodeName:multinode-720500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.22.154.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.22.154.20 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 14:50:29.559289    9752 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.22.154.20
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-720500"
	  kubeletExtraArgs:
	    node-ip: 172.22.154.20
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.22.154.20"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 14:50:29.572579    9752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 14:50:29.590342    9752 command_runner.go:130] > kubeadm
	I0603 14:50:29.590342    9752 command_runner.go:130] > kubectl
	I0603 14:50:29.590342    9752 command_runner.go:130] > kubelet
	I0603 14:50:29.590342    9752 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 14:50:29.603028    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 14:50:29.619684    9752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0603 14:50:29.648429    9752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 14:50:29.679305    9752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
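Before the init phases re-run against the freshly staged file, the generated config can be sanity-checked with kubeadm itself (a sketch, assuming the bundled v1.30.1 binaries as in the log, whose kubeadm supports the `config validate` subcommand):

  $ sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new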
	I0603 14:50:29.725797    9752 ssh_runner.go:195] Run: grep 172.22.154.20	control-plane.minikube.internal$ /etc/hosts
	I0603 14:50:29.731212    9752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.154.20	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 14:50:29.762682    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:50:29.964153    9752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 14:50:29.992948    9752 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500 for IP: 172.22.154.20
	I0603 14:50:29.993022    9752 certs.go:194] generating shared ca certs ...
	I0603 14:50:29.993022    9752 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:50:29.993685    9752 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 14:50:29.994104    9752 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 14:50:29.994405    9752 certs.go:256] generating profile certs ...
	I0603 14:50:29.994787    9752 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\client.key
	I0603 14:50:29.994787    9752 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key.fba88185
	I0603 14:50:29.995403    9752 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt.fba88185 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.22.154.20]
	I0603 14:50:30.282819    9752 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt.fba88185 ...
	I0603 14:50:30.282819    9752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt.fba88185: {Name:mk3ce09f3dfeb295693de4a303e0d19d5ad4f0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:50:30.284094    9752 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key.fba88185 ...
	I0603 14:50:30.284094    9752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key.fba88185: {Name:mk72162fc69bc37c51dc41730eaf528bd7879cbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:50:30.290035    9752 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt.fba88185 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt
	I0603 14:50:30.296118    9752 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key.fba88185 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key
	I0603 14:50:30.302065    9752 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.key
	I0603 14:50:30.302065    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 14:50:30.302065    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 14:50:30.302853    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 14:50:30.302916    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 14:50:30.302916    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 14:50:30.302916    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 14:50:30.303446    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 14:50:30.303743    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 14:50:30.304061    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 14:50:30.304584    9752 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 14:50:30.304755    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 14:50:30.304827    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 14:50:30.304827    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 14:50:30.305649    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 14:50:30.306167    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 14:50:30.306446    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 14:50:30.306650    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:50:30.306844    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
	I0603 14:50:30.308384    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 14:50:30.357242    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 14:50:30.408052    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 14:50:30.466550    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 14:50:30.509530    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 14:50:30.552860    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 14:50:30.598562    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 14:50:30.641657    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 14:50:30.685377    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 14:50:30.729265    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 14:50:30.772687    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 14:50:30.814997    9752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 14:50:30.857563    9752 ssh_runner.go:195] Run: openssl version
	I0603 14:50:30.866181    9752 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 14:50:30.879178    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 14:50:30.910588    9752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:50:30.917811    9752 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:50:30.917919    9752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:50:30.930458    9752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:50:30.938518    9752 command_runner.go:130] > b5213941
	I0603 14:50:30.951780    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 14:50:30.983814    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 14:50:31.014838    9752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 14:50:31.022141    9752 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 14:50:31.022693    9752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 14:50:31.034123    9752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 14:50:31.042974    9752 command_runner.go:130] > 51391683
	I0603 14:50:31.055159    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
	I0603 14:50:31.091504    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 14:50:31.122571    9752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 14:50:31.129679    9752 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 14:50:31.130694    9752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 14:50:31.142979    9752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 14:50:31.151940    9752 command_runner.go:130] > 3ec20f2e
	I0603 14:50:31.165559    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 14:50:31.196576    9752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 14:50:31.203514    9752 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 14:50:31.203514    9752 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0603 14:50:31.203514    9752 command_runner.go:130] > Device: 8,1	Inode: 5243218     Links: 1
	I0603 14:50:31.203514    9752 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 14:50:31.203514    9752 command_runner.go:130] > Access: 2024-06-03 14:27:05.373933748 +0000
	I0603 14:50:31.203514    9752 command_runner.go:130] > Modify: 2024-06-03 14:27:05.373933748 +0000
	I0603 14:50:31.203514    9752 command_runner.go:130] > Change: 2024-06-03 14:27:05.373933748 +0000
	I0603 14:50:31.203514    9752 command_runner.go:130] >  Birth: 2024-06-03 14:27:05.373933748 +0000
	I0603 14:50:31.214709    9752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 14:50:31.223631    9752 command_runner.go:130] > Certificate will not expire
	I0603 14:50:31.236029    9752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 14:50:31.244712    9752 command_runner.go:130] > Certificate will not expire
	I0603 14:50:31.256468    9752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 14:50:31.266297    9752 command_runner.go:130] > Certificate will not expire
	I0603 14:50:31.279817    9752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 14:50:31.289926    9752 command_runner.go:130] > Certificate will not expire
	I0603 14:50:31.303055    9752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 14:50:31.313094    9752 command_runner.go:130] > Certificate will not expire
	I0603 14:50:31.326077    9752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 14:50:31.335901    9752 command_runner.go:130] > Certificate will not expire
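The six openssl probes above all pass -checkend 86400, i.e. "will this certificate still be valid 86400 seconds (24 hours) from now?"; openssl prints "Certificate will not expire" and exits 0 when the cert is good for at least another day. The same check for any single cert:

  $ sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo ok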
	I0603 14:50:31.336096    9752 kubeadm.go:391] StartCluster: {Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.154.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.146.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.22.151.134 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:50:31.346639    9752 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 14:50:31.383771    9752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 14:50:31.402548    9752 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0603 14:50:31.402548    9752 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0603 14:50:31.402548    9752 command_runner.go:130] > /var/lib/minikube/etcd:
	I0603 14:50:31.402548    9752 command_runner.go:130] > member
	W0603 14:50:31.403604    9752 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 14:50:31.403604    9752 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 14:50:31.403604    9752 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 14:50:31.415631    9752 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 14:50:31.433674    9752 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 14:50:31.435767    9752 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-720500" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:50:31.436276    9752 kubeconfig.go:62] C:\Users\jenkins.minikube3\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-720500" cluster setting kubeconfig missing "multinode-720500" context setting]
	I0603 14:50:31.436642    9752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:50:31.452263    9752 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:50:31.452912    9752 kapi.go:59] client config for multinode-720500: &rest.Config{Host:"https://172.22.154.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-720500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-720500/client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 14:50:31.454810    9752 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 14:50:31.466380    9752 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 14:50:31.489965    9752 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0603 14:50:31.489965    9752 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0603 14:50:31.489965    9752 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0603 14:50:31.489965    9752 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0603 14:50:31.489965    9752 command_runner.go:130] >  kind: InitConfiguration
	I0603 14:50:31.489965    9752 command_runner.go:130] >  localAPIEndpoint:
	I0603 14:50:31.489965    9752 command_runner.go:130] > -  advertiseAddress: 172.22.150.195
	I0603 14:50:31.489965    9752 command_runner.go:130] > +  advertiseAddress: 172.22.154.20
	I0603 14:50:31.489965    9752 command_runner.go:130] >    bindPort: 8443
	I0603 14:50:31.489965    9752 command_runner.go:130] >  bootstrapTokens:
	I0603 14:50:31.489965    9752 command_runner.go:130] >    - groups:
	I0603 14:50:31.489965    9752 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0603 14:50:31.489965    9752 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0603 14:50:31.489965    9752 command_runner.go:130] >    name: "multinode-720500"
	I0603 14:50:31.489965    9752 command_runner.go:130] >    kubeletExtraArgs:
	I0603 14:50:31.489965    9752 command_runner.go:130] > -    node-ip: 172.22.150.195
	I0603 14:50:31.489965    9752 command_runner.go:130] > +    node-ip: 172.22.154.20
	I0603 14:50:31.489965    9752 command_runner.go:130] >    taints: []
	I0603 14:50:31.489965    9752 command_runner.go:130] >  ---
	I0603 14:50:31.489965    9752 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0603 14:50:31.489965    9752 command_runner.go:130] >  kind: ClusterConfiguration
	I0603 14:50:31.489965    9752 command_runner.go:130] >  apiServer:
	I0603 14:50:31.489965    9752 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.22.150.195"]
	I0603 14:50:31.489965    9752 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.22.154.20"]
	I0603 14:50:31.489965    9752 command_runner.go:130] >    extraArgs:
	I0603 14:50:31.489965    9752 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0603 14:50:31.489965    9752 command_runner.go:130] >  controllerManager:
	I0603 14:50:31.489965    9752 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.22.150.195
	+  advertiseAddress: 172.22.154.20
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-720500"
	   kubeletExtraArgs:
	-    node-ip: 172.22.150.195
	+    node-ip: 172.22.154.20
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.22.150.195"]
	+  certSANs: ["127.0.0.1", "localhost", "172.22.154.20"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0603 14:50:31.489965    9752 kubeadm.go:1154] stopping kube-system containers ...
	I0603 14:50:31.495744    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 14:50:31.524883    9752 command_runner.go:130] > 68e49c3e6dda
	I0603 14:50:31.524883    9752 command_runner.go:130] > 097ab9a9a33b
	I0603 14:50:31.524883    9752 command_runner.go:130] > 38b548c7f105
	I0603 14:50:31.524883    9752 command_runner.go:130] > 1ac710138e87
	I0603 14:50:31.524883    9752 command_runner.go:130] > ab840a6a9856
	I0603 14:50:31.524883    9752 command_runner.go:130] > 3823f2e2bdb2
	I0603 14:50:31.524883    9752 command_runner.go:130] > 91df341636e8
	I0603 14:50:31.524883    9752 command_runner.go:130] > 45c98b77811e
	I0603 14:50:31.524883    9752 command_runner.go:130] > dcd798ff8a46
	I0603 14:50:31.524883    9752 command_runner.go:130] > 5185046feae6
	I0603 14:50:31.524883    9752 command_runner.go:130] > 63a6ebee2e83
	I0603 14:50:31.524883    9752 command_runner.go:130] > ec3860b2bb3e
	I0603 14:50:31.524883    9752 command_runner.go:130] > 19b3080db261
	I0603 14:50:31.524883    9752 command_runner.go:130] > 73f8312902b0
	I0603 14:50:31.524883    9752 command_runner.go:130] > bf3e16838818
	I0603 14:50:31.524883    9752 command_runner.go:130] > 7dbe33ccede8
	I0603 14:50:31.524883    9752 docker.go:483] Stopping containers: [68e49c3e6dda 097ab9a9a33b 38b548c7f105 1ac710138e87 ab840a6a9856 3823f2e2bdb2 91df341636e8 45c98b77811e dcd798ff8a46 5185046feae6 63a6ebee2e83 ec3860b2bb3e 19b3080db261 73f8312902b0 bf3e16838818 7dbe33ccede8]
	I0603 14:50:31.537637    9752 ssh_runner.go:195] Run: docker stop 68e49c3e6dda 097ab9a9a33b 38b548c7f105 1ac710138e87 ab840a6a9856 3823f2e2bdb2 91df341636e8 45c98b77811e dcd798ff8a46 5185046feae6 63a6ebee2e83 ec3860b2bb3e 19b3080db261 73f8312902b0 bf3e16838818 7dbe33ccede8
	I0603 14:50:31.565425    9752 command_runner.go:130] > 68e49c3e6dda
	I0603 14:50:31.565568    9752 command_runner.go:130] > 097ab9a9a33b
	I0603 14:50:31.565568    9752 command_runner.go:130] > 38b548c7f105
	I0603 14:50:31.565568    9752 command_runner.go:130] > 1ac710138e87
	I0603 14:50:31.565623    9752 command_runner.go:130] > ab840a6a9856
	I0603 14:50:31.565623    9752 command_runner.go:130] > 3823f2e2bdb2
	I0603 14:50:31.565623    9752 command_runner.go:130] > 91df341636e8
	I0603 14:50:31.565659    9752 command_runner.go:130] > 45c98b77811e
	I0603 14:50:31.565659    9752 command_runner.go:130] > dcd798ff8a46
	I0603 14:50:31.565697    9752 command_runner.go:130] > 5185046feae6
	I0603 14:50:31.565697    9752 command_runner.go:130] > 63a6ebee2e83
	I0603 14:50:31.565731    9752 command_runner.go:130] > ec3860b2bb3e
	I0603 14:50:31.565731    9752 command_runner.go:130] > 19b3080db261
	I0603 14:50:31.565731    9752 command_runner.go:130] > 73f8312902b0
	I0603 14:50:31.565731    9752 command_runner.go:130] > bf3e16838818
	I0603 14:50:31.565731    9752 command_runner.go:130] > 7dbe33ccede8
	I0603 14:50:31.578802    9752 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 14:50:31.617716    9752 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 14:50:31.635887    9752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0603 14:50:31.635887    9752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0603 14:50:31.636645    9752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0603 14:50:31.636645    9752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 14:50:31.636967    9752 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 14:50:31.637025    9752 kubeadm.go:156] found existing configuration files:
	
	I0603 14:50:31.648483    9752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 14:50:31.665306    9752 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 14:50:31.665385    9752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 14:50:31.677521    9752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 14:50:31.709088    9752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 14:50:31.725891    9752 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 14:50:31.726839    9752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 14:50:31.739642    9752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 14:50:31.769317    9752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 14:50:31.786917    9752 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 14:50:31.787226    9752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 14:50:31.800374    9752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 14:50:31.833312    9752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 14:50:31.851422    9752 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 14:50:31.852393    9752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 14:50:31.864186    9752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 14:50:31.894499    9752 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 14:50:31.913712    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 14:50:32.213078    9752 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 14:50:32.213078    9752 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0603 14:50:32.213078    9752 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0603 14:50:32.213078    9752 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 14:50:32.213204    9752 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0603 14:50:32.213204    9752 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0603 14:50:32.213204    9752 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0603 14:50:32.213204    9752 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0603 14:50:32.213204    9752 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0603 14:50:32.213297    9752 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 14:50:32.213345    9752 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 14:50:32.213345    9752 command_runner.go:130] > [certs] Using the existing "sa" key
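Because every certificate already existed on disk, the certs phase is effectively a no-op here; a fuller expiry report for the same cert set could be pulled with (same assumed binary path as above):

  $ sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm certs check-expiration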
	I0603 14:50:32.213345    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 14:50:33.401490    9752 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 14:50:33.401490    9752 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 14:50:33.401490    9752 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 14:50:33.401490    9752 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 14:50:33.401490    9752 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 14:50:33.401490    9752 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 14:50:33.401490    9752 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1881348s)
	I0603 14:50:33.401490    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 14:50:33.713996    9752 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 14:50:33.713996    9752 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 14:50:33.713996    9752 command_runner.go:130] > [kubelet-start] Starting the kubelet
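Once the kubelet-start phase reports "Starting the kubelet", the unit should be active on the node; a quick manual check if it is not (a sketch):

  $ sudo systemctl is-active kubelet
  $ sudo journalctl -u kubelet --no-pager -n 20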
	I0603 14:50:33.714130    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 14:50:33.794194    9752 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 14:50:33.794286    9752 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 14:50:33.794286    9752 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 14:50:33.794286    9752 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 14:50:33.794360    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 14:50:33.890515    9752 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 14:50:33.890515    9752 api_server.go:52] waiting for apiserver process to appear ...
	I0603 14:50:33.903721    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:50:34.406708    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:50:34.912875    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:50:35.407053    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:50:35.907388    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:50:35.938200    9752 command_runner.go:130] > 1877
	I0603 14:50:35.938200    9752 api_server.go:72] duration metric: took 2.0476689s to wait for apiserver process to appear ...
	I0603 14:50:35.938200    9752 api_server.go:88] waiting for apiserver healthz status ...
	I0603 14:50:35.938200    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:50:39.322888    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 14:50:39.323845    9752 api_server.go:103] status: https://172.22.154.20:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 14:50:39.323881    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:50:39.392354    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 14:50:39.392354    9752 api_server.go:103] status: https://172.22.154.20:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 14:50:39.445637    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:50:39.461120    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 14:50:39.461188    9752 api_server.go:103] status: https://172.22.154.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 14:50:39.948070    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:50:39.964441    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 14:50:39.964652    9752 api_server.go:103] status: https://172.22.154.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 14:50:40.438860    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:50:40.450090    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 14:50:40.450232    9752 api_server.go:103] status: https://172.22.154.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 14:50:40.945934    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:50:40.953114    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 200:
	ok
	I0603 14:50:40.954001    9752 round_trippers.go:463] GET https://172.22.154.20:8443/version
	I0603 14:50:40.954077    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:40.954077    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:40.954171    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:40.970045    9752 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0603 14:50:40.970045    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:40.970045    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:40.970045    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:40.970045    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:40.970045    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:40.970045    9752 round_trippers.go:580]     Content-Length: 263
	I0603 14:50:40.970045    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:40 GMT
	I0603 14:50:40.970045    9752 round_trippers.go:580]     Audit-Id: 768ed4ca-76db-429c-9788-7f3f81fb4cdd
	I0603 14:50:40.970257    9752 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 14:50:40.970353    9752 api_server.go:141] control plane version: v1.30.1
	I0603 14:50:40.970460    9752 api_server.go:131] duration metric: took 5.0322185s to wait for apiserver health ...
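After the control-plane manifests are written, the log shows two waits: first for a kube-apiserver process to appear (the repeated pgrep calls), then for https://172.22.154.20:8443/healthz to move from 403/500 to 200 as the apiserver's post-start hooks complete. A minimal sketch of that healthz polling loop, using the address from the log and skipping TLS verification purely for brevity (a real check would trust the cluster CA; this is not minikube's api_server.go code):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Address taken from the log above; InsecureSkipVerify is for brevity only.
    	url := "https://172.22.154.20:8443/healthz"
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // apiserver is healthy; 403 and 500 mean "keep waiting"
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

The intermediate 500 responses above list exactly which post-start hooks ([-] entries) are still pending, which is why the check is retried roughly every half second rather than failing outright.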
	I0603 14:50:40.970460    9752 cni.go:84] Creating CNI manager for ""
	I0603 14:50:40.970513    9752 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 14:50:40.974328    9752 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 14:50:40.988680    9752 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 14:50:41.002893    9752 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0603 14:50:41.002989    9752 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0603 14:50:41.002989    9752 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0603 14:50:41.002989    9752 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 14:50:41.002989    9752 command_runner.go:130] > Access: 2024-06-03 14:49:06.725646200 +0000
	I0603 14:50:41.002989    9752 command_runner.go:130] > Modify: 2024-05-22 23:10:00.000000000 +0000
	I0603 14:50:41.002989    9752 command_runner.go:130] > Change: 2024-06-03 14:48:56.608000000 +0000
	I0603 14:50:41.002989    9752 command_runner.go:130] >  Birth: -
	I0603 14:50:41.002989    9752 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 14:50:41.003157    9752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 14:50:41.100030    9752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0603 14:50:42.138239    9752 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0603 14:50:42.138459    9752 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0603 14:50:42.138459    9752 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0603 14:50:42.138459    9752 command_runner.go:130] > daemonset.apps/kindnet configured
	I0603 14:50:42.138528    9752 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.0384887s)
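With the apiserver healthy, minikube configures CNI: it verifies the portmap plugin exists under /opt/cni/bin, then applies the generated kindnet manifest with the bundled kubectl. A rough equivalent of those two steps, using the paths from the log (illustrative only; minikube runs both commands over SSH via ssh_runner):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// 1) make sure the portmap CNI plugin is present (the log runs `stat` over SSH)
    	if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
    		fmt.Fprintln(os.Stderr, "portmap plugin missing:", err)
    		os.Exit(1)
    	}
    	// 2) apply the generated kindnet manifest with the bundled kubectl
    	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.1/kubectl",
    		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", "/var/tmp/minikube/cni.yaml")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "kubectl apply failed:", err)
    		os.Exit(1)
    	}
    }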
	I0603 14:50:42.138636    9752 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 14:50:42.138837    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods
	I0603 14:50:42.138872    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.138872    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.138872    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.149280    9752 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 14:50:42.149639    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.149639    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.149639    9752 round_trippers.go:580]     Audit-Id: 7117e1ad-541b-4bc1-ba2a-030ea5d6cdd6
	I0603 14:50:42.149639    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.149639    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.149639    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.149701    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.150979    9752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1818"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 79260 chars]
	I0603 14:50:42.157663    9752 system_pods.go:59] 11 kube-system pods found
	I0603 14:50:42.157663    9752 system_pods.go:61] "coredns-7db6d8ff4d-c9wpc" [5d120704-a803-4278-aa7c-32304a6164a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 14:50:42.157663    9752 system_pods.go:61] "etcd-multinode-720500" [1a2533a2-16e9-4696-9694-186579c52b55] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 14:50:42.157663    9752 system_pods.go:61] "kindnet-26s27" [08ea7c30-4962-4026-8eb0-6864835e97e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0603 14:50:42.157663    9752 system_pods.go:61] "kindnet-fmfz2" [78515e23-16d2-4a8e-9845-375aa17ab80b] Running
	I0603 14:50:42.157663    9752 system_pods.go:61] "kindnet-h58hc" [43c48b16-ca18-4ce1-9a34-be58cc0c981b] Running
	I0603 14:50:42.157663    9752 system_pods.go:61] "kube-controller-manager-multinode-720500" [6ba9c1e5-75bb-4731-9105-49acbbf3f237] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 14:50:42.157663    9752 system_pods.go:61] "kube-proxy-64l9x" [ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 14:50:42.157663    9752 system_pods.go:61] "kube-proxy-ctm5l" [38069b1b-8ba9-46af-b4e7-7add5d9c67fc] Running
	I0603 14:50:42.157663    9752 system_pods.go:61] "kube-proxy-sm9rr" [4f0321c0-f47d-463e-bda2-919f37735748] Running
	I0603 14:50:42.157663    9752 system_pods.go:61] "kube-scheduler-multinode-720500" [9d420d28-dde0-4504-a4d4-f840cab56ebe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 14:50:42.157663    9752 system_pods.go:61] "storage-provisioner" [8380cfdf-9758-4fd8-a511-db50974806a2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 14:50:42.157663    9752 system_pods.go:74] duration metric: took 19.0038ms to wait for pod list to return data ...
	I0603 14:50:42.157663    9752 node_conditions.go:102] verifying NodePressure condition ...
	I0603 14:50:42.158251    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes
	I0603 14:50:42.158251    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.158251    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.158304    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.168418    9752 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 14:50:42.168418    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.168418    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.168418    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.168418    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.168418    9752 round_trippers.go:580]     Audit-Id: 6b446131-60ee-4ac0-982b-a319a74780bc
	I0603 14:50:42.168418    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.168418    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.168418    9752 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1818"},"items":[{"metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16289 chars]
	I0603 14:50:42.170628    9752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:50:42.170710    9752 node_conditions.go:123] node cpu capacity is 2
	I0603 14:50:42.170743    9752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:50:42.170743    9752 node_conditions.go:123] node cpu capacity is 2
	I0603 14:50:42.170743    9752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:50:42.170743    9752 node_conditions.go:123] node cpu capacity is 2
	I0603 14:50:42.170743    9752 node_conditions.go:105] duration metric: took 13.0797ms to run NodePressure ...
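The NodePressure check above lists all three nodes and records each node's CPU and ephemeral-storage capacity from the NodeList response. A small client-go sketch of the same read, with a placeholder kubeconfig path (not the code minikube uses):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, node := range nodes.Items {
    		cpu := node.Status.Capacity[corev1.ResourceCPU]
    		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", node.Name, cpu.String(), storage.String())
    	}
    }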
	I0603 14:50:42.170794    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 14:50:42.550050    9752 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0603 14:50:42.550804    9752 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0603 14:50:42.550804    9752 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 14:50:42.550921    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0603 14:50:42.550921    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.550921    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.550921    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.572447    9752 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0603 14:50:42.572548    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.572548    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.572548    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.572548    9752 round_trippers.go:580]     Audit-Id: a94334cf-c1d1-4564-a53e-1dce5487adff
	I0603 14:50:42.572611    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.572649    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.572649    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.572785    9752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1824"},"items":[{"metadata":{"name":"etcd-multinode-720500","namespace":"kube-system","uid":"1a2533a2-16e9-4696-9694-186579c52b55","resourceVersion":"1805","creationTimestamp":"2024-06-03T14:50:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.154.20:2379","kubernetes.io/config.hash":"7a9c45e53018cd74c5a13ccfd96f1479","kubernetes.io/config.mirror":"7a9c45e53018cd74c5a13ccfd96f1479","kubernetes.io/config.seen":"2024-06-03T14:50:33.894763922Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:50:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 21600 chars]
	I0603 14:50:42.574536    9752 kubeadm.go:733] kubelet initialised
	I0603 14:50:42.574646    9752 kubeadm.go:734] duration metric: took 23.8059ms waiting for restarted kubelet to initialise ...
	I0603 14:50:42.574646    9752 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 14:50:42.574797    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods
	I0603 14:50:42.574814    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.574850    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.574850    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.586083    9752 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0603 14:50:42.586310    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.586310    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.586310    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.586310    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.586310    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.586444    9752 round_trippers.go:580]     Audit-Id: bea86d3d-08ff-485f-a162-fcaf18e76504
	I0603 14:50:42.586444    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.588124    9752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1824"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 78667 chars]
	I0603 14:50:42.593888    9752 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:42.593888    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:50:42.593888    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.593888    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.593888    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.595656    9752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:50:42.595656    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.595656    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.595656    9752 round_trippers.go:580]     Audit-Id: 3ca27f6b-0589-4bdb-bf10-84150c54e1ec
	I0603 14:50:42.595656    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.595656    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.595656    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.595656    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.596864    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:50:42.597540    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:42.597660    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.597660    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.597660    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.600170    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:50:42.600170    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.600170    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.600170    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.601001    9752 round_trippers.go:580]     Audit-Id: a5ad8d3a-7b10-4b4a-9613-05eb4bc81cd7
	I0603 14:50:42.601001    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.601001    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.601001    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.601327    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:42.601784    9752 pod_ready.go:97] node "multinode-720500" hosting pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.601865    9752 pod_ready.go:81] duration metric: took 7.9771ms for pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:42.601865    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500" hosting pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
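Each iteration of the pod_ready wait follows the pattern visible above: fetch the pod, fetch the node it runs on, and treat the pod as not Ready if either the pod's Ready condition or the hosting node's Ready condition is false. A hedged client-go sketch of that gate for the coredns pod named in the log (placeholder kubeconfig path; not minikube's pod_ready implementation):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeIsReady(conds []corev1.NodeCondition) bool {
    	for _, c := range conds {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-c9wpc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	podReady := false
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			podReady = true
    		}
    	}
    	fmt.Printf("pod Ready=%v, hosting node %q Ready=%v\n", podReady, node.Name, nodeIsReady(node.Status.Conditions))
    }

In the run above the node itself still reports Ready "False", so every control-plane pod is skipped as not Ready regardless of its own condition, and the wait moves on to the next pod.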
	I0603 14:50:42.601865    9752 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:42.601974    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-720500
	I0603 14:50:42.602049    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.602049    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.602049    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.604314    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:50:42.604314    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.604314    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.604314    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.604314    9752 round_trippers.go:580]     Audit-Id: d94e13bb-e31d-48d0-ab47-53ba905d0d78
	I0603 14:50:42.604314    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.604314    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.604718    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.604932    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-720500","namespace":"kube-system","uid":"1a2533a2-16e9-4696-9694-186579c52b55","resourceVersion":"1805","creationTimestamp":"2024-06-03T14:50:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.154.20:2379","kubernetes.io/config.hash":"7a9c45e53018cd74c5a13ccfd96f1479","kubernetes.io/config.mirror":"7a9c45e53018cd74c5a13ccfd96f1479","kubernetes.io/config.seen":"2024-06-03T14:50:33.894763922Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:50:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0603 14:50:42.605459    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:42.605541    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.605541    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.605541    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.607964    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:50:42.607964    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.607964    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.607964    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.607964    9752 round_trippers.go:580]     Audit-Id: a32951b2-e900-45b0-be5b-bd4000db1513
	I0603 14:50:42.607964    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.607964    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.607964    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.608962    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:42.609589    9752 pod_ready.go:97] node "multinode-720500" hosting pod "etcd-multinode-720500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.609589    9752 pod_ready.go:81] duration metric: took 7.7238ms for pod "etcd-multinode-720500" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:42.609589    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500" hosting pod "etcd-multinode-720500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.609589    9752 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:42.609589    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-720500
	I0603 14:50:42.609589    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.609589    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.609589    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.618388    9752 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 14:50:42.618388    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.618388    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.618388    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.618388    9752 round_trippers.go:580]     Audit-Id: 808aabe5-a24b-413d-bc45-d73038d43a59
	I0603 14:50:42.618388    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.618388    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.618388    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.619159    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-720500","namespace":"kube-system","uid":"6ba9c1e5-75bb-4731-9105-49acbbf3f237","resourceVersion":"1804","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"78d1bd07ad8cdd8611c0b5d7e797ef30","kubernetes.io/config.mirror":"78d1bd07ad8cdd8611c0b5d7e797ef30","kubernetes.io/config.seen":"2024-06-03T14:27:18.382156638Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0603 14:50:42.619409    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:42.619409    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.619409    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.619409    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.626215    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:50:42.626215    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.626215    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.626215    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.626215    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.626215    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.626215    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.626215    9752 round_trippers.go:580]     Audit-Id: 1666dac5-4137-4733-8784-b21b0e7c81fc
	I0603 14:50:42.627001    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:42.627144    9752 pod_ready.go:97] node "multinode-720500" hosting pod "kube-controller-manager-multinode-720500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.627144    9752 pod_ready.go:81] duration metric: took 17.5546ms for pod "kube-controller-manager-multinode-720500" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:42.627144    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500" hosting pod "kube-controller-manager-multinode-720500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.627144    9752 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64l9x" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:42.627144    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-64l9x
	I0603 14:50:42.627144    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.627144    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.627144    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.630018    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:50:42.630018    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.630018    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.630018    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.630018    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.630018    9752 round_trippers.go:580]     Audit-Id: 67c3b156-7901-4bb3-944a-ce49294335f6
	I0603 14:50:42.630539    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.630539    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.631184    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-64l9x","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a","resourceVersion":"1822","creationTimestamp":"2024-06-03T14:27:32Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0603 14:50:42.631756    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:42.631756    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.631756    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.631756    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.650970    9752 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0603 14:50:42.651331    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.651331    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.651331    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.651331    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.651331    9752 round_trippers.go:580]     Audit-Id: 208ca559-880b-4c23-8d04-e71bf1f3f323
	I0603 14:50:42.651331    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.651413    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.651493    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:42.652095    9752 pod_ready.go:97] node "multinode-720500" hosting pod "kube-proxy-64l9x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.652156    9752 pod_ready.go:81] duration metric: took 25.0122ms for pod "kube-proxy-64l9x" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:42.652156    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500" hosting pod "kube-proxy-64l9x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.652156    9752 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ctm5l" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:42.761670    9752 request.go:629] Waited for 109.5131ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctm5l
	I0603 14:50:42.762030    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctm5l
	I0603 14:50:42.762117    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.762117    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.762117    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.766688    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:42.766899    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.767010    9752 round_trippers.go:580]     Audit-Id: 87f222ea-bd14-44a6-b1de-7fe3972342f5
	I0603 14:50:42.767010    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.767010    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.767010    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.767010    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.767010    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.767303    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ctm5l","generateName":"kube-proxy-","namespace":"kube-system","uid":"38069b1b-8ba9-46af-b4e7-7add5d9c67fc","resourceVersion":"1761","creationTimestamp":"2024-06-03T14:35:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0603 14:50:42.964358    9752 request.go:629] Waited for 196.0468ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m03
	I0603 14:50:42.964358    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m03
	I0603 14:50:42.964358    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.964358    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.964358    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.969724    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:50:42.969724    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.969724    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.969724    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.969724    9752 round_trippers.go:580]     Audit-Id: 6716eae3-c43e-4b96-a6ac-6b25a3d3c482
	I0603 14:50:42.969724    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.969724    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.969724    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.972028    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m03","uid":"daf03ea9-c0d0-4565-9ad8-44cd4fce8e19","resourceVersion":"1770","creationTimestamp":"2024-06-03T14:46:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_46_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:46:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0603 14:50:42.972210    9752 pod_ready.go:97] node "multinode-720500-m03" hosting pod "kube-proxy-ctm5l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m03" has status "Ready":"Unknown"
	I0603 14:50:42.972210    9752 pod_ready.go:81] duration metric: took 320.0513ms for pod "kube-proxy-ctm5l" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:42.972210    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500-m03" hosting pod "kube-proxy-ctm5l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m03" has status "Ready":"Unknown"
	I0603 14:50:42.972210    9752 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sm9rr" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:43.151706    9752 request.go:629] Waited for 178.7035ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sm9rr
	I0603 14:50:43.152034    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sm9rr
	I0603 14:50:43.152034    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:43.152034    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:43.152034    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:43.159849    9752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 14:50:43.159849    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:43.159849    9752 round_trippers.go:580]     Audit-Id: 7fe22f0d-acfb-4e87-aa89-658d771551f9
	I0603 14:50:43.159849    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:43.159849    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:43.159849    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:43.159849    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:43.159849    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:43 GMT
	I0603 14:50:43.159849    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sm9rr","generateName":"kube-proxy-","namespace":"kube-system","uid":"4f0321c0-f47d-463e-bda2-919f37735748","resourceVersion":"1786","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0603 14:50:43.353269    9752 request.go:629] Waited for 192.6144ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:50:43.353531    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:50:43.353531    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:43.353609    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:43.353609    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:43.360310    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:50:43.360310    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:43.360310    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:43.360310    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:43.360310    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:43.360310    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:43 GMT
	I0603 14:50:43.360310    9752 round_trippers.go:580]     Audit-Id: 664327f0-76ca-48b2-9002-d728662e98e4
	I0603 14:50:43.360310    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:43.360310    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"1785","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4486 chars]
	I0603 14:50:43.361096    9752 pod_ready.go:97] node "multinode-720500-m02" hosting pod "kube-proxy-sm9rr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m02" has status "Ready":"Unknown"
	I0603 14:50:43.361096    9752 pod_ready.go:81] duration metric: took 388.8828ms for pod "kube-proxy-sm9rr" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:43.361096    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500-m02" hosting pod "kube-proxy-sm9rr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m02" has status "Ready":"Unknown"
	I0603 14:50:43.361096    9752 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:43.555552    9752 request.go:629] Waited for 194.4545ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-720500
	I0603 14:50:43.555908    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-720500
	I0603 14:50:43.556042    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:43.556042    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:43.556042    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:43.559377    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:43.559655    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:43.559655    9752 round_trippers.go:580]     Audit-Id: 43d57b5b-de71-46e9-9856-5ce7d54e6b4a
	I0603 14:50:43.559655    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:43.559655    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:43.559655    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:43.559772    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:43.559772    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:43 GMT
	I0603 14:50:43.559911    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-720500","namespace":"kube-system","uid":"9d420d28-dde0-4504-a4d4-f840cab56ebe","resourceVersion":"1802","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f58e384885de6f2352fb028e836ba47f","kubernetes.io/config.mirror":"f58e384885de6f2352fb028e836ba47f","kubernetes.io/config.seen":"2024-06-03T14:27:18.382157538Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0603 14:50:43.758561    9752 request.go:629] Waited for 197.4939ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:43.758650    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:43.758650    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:43.758880    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:43.758880    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:43.762595    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:43.762595    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:43.762595    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:43.762595    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:43 GMT
	I0603 14:50:43.762802    9752 round_trippers.go:580]     Audit-Id: 1049f630-b549-4500-960c-545477b71ae6
	I0603 14:50:43.762802    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:43.762802    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:43.762802    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:43.763290    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:43.763859    9752 pod_ready.go:97] node "multinode-720500" hosting pod "kube-scheduler-multinode-720500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:43.763859    9752 pod_ready.go:81] duration metric: took 402.7594ms for pod "kube-scheduler-multinode-720500" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:43.763929    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500" hosting pod "kube-scheduler-multinode-720500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:43.763929    9752 pod_ready.go:38] duration metric: took 1.1892736s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 14:50:43.763929    9752 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 14:50:43.783079    9752 command_runner.go:130] > -16
	I0603 14:50:43.783079    9752 ops.go:34] apiserver oom_adj: -16
	I0603 14:50:43.783079    9752 kubeadm.go:591] duration metric: took 12.3793736s to restartPrimaryControlPlane
	I0603 14:50:43.783079    9752 kubeadm.go:393] duration metric: took 12.4468804s to StartCluster
	I0603 14:50:43.783079    9752 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:50:43.783634    9752 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:50:43.786229    9752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:50:43.788934    9752 start.go:234] Will wait 6m0s for node &{Name: IP:172.22.154.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 14:50:43.788934    9752 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 14:50:43.793634    9752 out.go:177] * Verifying Kubernetes components...
	I0603 14:50:43.788934    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:50:43.798082    9752 out.go:177] * Enabled addons: 
	I0603 14:50:43.801075    9752 addons.go:510] duration metric: took 12.1411ms for enable addons: enabled=[]
	I0603 14:50:43.808206    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:50:44.080025    9752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 14:50:44.114641    9752 node_ready.go:35] waiting up to 6m0s for node "multinode-720500" to be "Ready" ...
	I0603 14:50:44.114641    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:44.114641    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:44.114641    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:44.114641    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:44.118171    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:44.118171    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:44.118171    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:44.118171    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:44.119147    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:44.119147    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:44 GMT
	I0603 14:50:44.119147    9752 round_trippers.go:580]     Audit-Id: e111bc66-e96c-4449-9dfc-b7a08b199cd6
	I0603 14:50:44.119147    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:44.119355    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:44.619879    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:44.619879    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:44.619879    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:44.619994    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:44.624505    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:44.624549    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:44.624549    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:44.624549    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:44.624549    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:44.624549    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:44.624549    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:44 GMT
	I0603 14:50:44.624549    9752 round_trippers.go:580]     Audit-Id: 11c8216b-bef0-4230-9940-6ce810c6b064
	I0603 14:50:44.624630    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:45.117020    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:45.117020    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:45.117020    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:45.117020    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:45.120599    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:45.120599    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:45.121518    9752 round_trippers.go:580]     Audit-Id: 8a723eff-9ee1-401c-b716-68f704c82417
	I0603 14:50:45.121518    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:45.121518    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:45.121518    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:45.121518    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:45.121518    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:45 GMT
	I0603 14:50:45.121749    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:45.621560    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:45.621560    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:45.621560    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:45.621560    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:45.625483    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:45.625483    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:45.625483    9752 round_trippers.go:580]     Audit-Id: f80c4814-5055-481a-89cc-1799a3aff349
	I0603 14:50:45.625483    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:45.625483    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:45.625483    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:45.625483    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:45.625483    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:45 GMT
	I0603 14:50:45.625483    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:46.127588    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:46.127588    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:46.127588    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:46.127588    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:46.141209    9752 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 14:50:46.141209    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:46.141209    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:46.141420    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:46.141420    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:46 GMT
	I0603 14:50:46.141420    9752 round_trippers.go:580]     Audit-Id: bb144405-5e94-401b-bd71-2656fb8db0c9
	I0603 14:50:46.141420    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:46.141420    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:46.144803    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:46.145315    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:50:46.631293    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:46.631293    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:46.631293    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:46.631293    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:46.634017    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:50:46.634017    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:46.634017    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:46 GMT
	I0603 14:50:46.634017    9752 round_trippers.go:580]     Audit-Id: 0a2480ec-37d6-4f5c-8779-be70230aa0c3
	I0603 14:50:46.635076    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:46.635076    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:46.635076    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:46.635076    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:46.635375    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:47.127871    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:47.127871    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:47.128100    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:47.128100    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:47.131894    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:47.131894    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:47.131894    9752 round_trippers.go:580]     Audit-Id: fbe126e8-e878-426f-8527-30f8df41f7eb
	I0603 14:50:47.131894    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:47.131894    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:47.132878    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:47.132878    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:47.132878    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:47 GMT
	I0603 14:50:47.133318    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:47.615681    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:47.615763    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:47.615828    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:47.615828    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:47.619702    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:47.620263    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:47.620263    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:47.620263    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:47 GMT
	I0603 14:50:47.620263    9752 round_trippers.go:580]     Audit-Id: 9804b620-ec73-42fb-a04d-a99c32ddb9ba
	I0603 14:50:47.620263    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:47.620263    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:47.620263    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:47.620966    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:48.115800    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:48.115800    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:48.115800    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:48.115800    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:48.121383    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:50:48.121467    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:48.121494    9752 round_trippers.go:580]     Audit-Id: a4330963-e56f-4667-8df0-8ee19cd77160
	I0603 14:50:48.121494    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:48.121494    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:48.121494    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:48.121545    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:48.121545    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:48 GMT
	I0603 14:50:48.122858    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:48.616329    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:48.616329    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:48.616329    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:48.616329    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:48.620502    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:48.620502    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:48.620593    9752 round_trippers.go:580]     Audit-Id: f242052b-0d44-4c84-b52e-649abd5ee96b
	I0603 14:50:48.620593    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:48.620593    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:48.620593    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:48.620593    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:48.620593    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:48 GMT
	I0603 14:50:48.621005    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:48.622096    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:50:49.116216    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:49.116216    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:49.116216    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:49.116216    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:49.119884    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:49.120656    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:49.120656    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:49.120656    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:49 GMT
	I0603 14:50:49.120656    9752 round_trippers.go:580]     Audit-Id: b6a39755-e0d5-4d27-af05-f962c54952b3
	I0603 14:50:49.120656    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:49.120656    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:49.120656    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:49.120656    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:49.616840    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:49.616840    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:49.617053    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:49.617053    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:49.623173    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:50:49.623173    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:49.623173    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:49.623173    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:49.623173    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:49.623173    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:49 GMT
	I0603 14:50:49.623173    9752 round_trippers.go:580]     Audit-Id: 4ef321e7-4d0a-4a59-bbf5-7425d6368be2
	I0603 14:50:49.623694    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:49.623894    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:50.117132    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:50.117386    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:50.117443    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:50.117443    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:50.121727    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:50.121793    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:50.121793    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:50.121793    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:50.121793    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:50.121793    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:50 GMT
	I0603 14:50:50.121793    9752 round_trippers.go:580]     Audit-Id: a065e32c-9413-4390-b230-45d724bd4c7a
	I0603 14:50:50.121793    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:50.121793    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:50.621134    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:50.621251    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:50.621316    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:50.621316    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:50.624993    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:50.625162    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:50.625162    9752 round_trippers.go:580]     Audit-Id: 7dff31d5-33f2-43b0-b384-136459e283f8
	I0603 14:50:50.625162    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:50.625162    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:50.625162    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:50.625162    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:50.625162    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:50 GMT
	I0603 14:50:50.625845    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:50.626362    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:50:51.123803    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:51.123954    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:51.123954    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:51.123954    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:51.128574    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:51.128574    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:51.128574    9752 round_trippers.go:580]     Audit-Id: 63b0a5a8-2ee5-4fb9-9d1e-e164bf1ceab1
	I0603 14:50:51.128574    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:51.128574    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:51.128574    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:51.128574    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:51.128574    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:51 GMT
	I0603 14:50:51.128574    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:51.625484    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:51.625569    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:51.625569    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:51.625569    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:51.628684    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:51.628684    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:51.628684    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:51 GMT
	I0603 14:50:51.628684    9752 round_trippers.go:580]     Audit-Id: 90c4c16c-1a15-49d6-ad6b-1caa95268a73
	I0603 14:50:51.628684    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:51.628684    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:51.628684    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:51.628684    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:51.630450    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:52.125931    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:52.125931    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:52.125931    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:52.125931    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:52.129550    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:52.129550    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:52.129550    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:52 GMT
	I0603 14:50:52.129550    9752 round_trippers.go:580]     Audit-Id: 142575e2-f9f6-4d54-b29a-e0f2c2257dbf
	I0603 14:50:52.129550    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:52.129550    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:52.129550    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:52.129550    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:52.129550    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:52.618507    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:52.618507    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:52.618507    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:52.618507    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:52.624114    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:50:52.624114    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:52.624114    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:52.624114    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:52.624114    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:52 GMT
	I0603 14:50:52.624416    9752 round_trippers.go:580]     Audit-Id: cfb88214-40a5-42b2-b64d-da77a76991bb
	I0603 14:50:52.624416    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:52.624416    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:52.625101    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:53.120865    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:53.120865    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:53.120865    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:53.120865    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:53.125578    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:53.125578    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:53.125578    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:53 GMT
	I0603 14:50:53.125843    9752 round_trippers.go:580]     Audit-Id: ed9f4b92-b597-427c-94c0-845d19732cb8
	I0603 14:50:53.125843    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:53.125843    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:53.125843    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:53.125843    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:53.126065    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:53.126189    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:50:53.618677    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:53.618677    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:53.618677    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:53.618677    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:53.622288    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:53.623102    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:53.623102    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:53.623102    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:53.623102    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:53 GMT
	I0603 14:50:53.623102    9752 round_trippers.go:580]     Audit-Id: 0a159cc5-3307-4f3e-bbed-7afb5f785f1e
	I0603 14:50:53.623102    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:53.623102    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:53.624369    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:54.118646    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:54.118646    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:54.118646    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:54.118646    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:54.122213    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:54.122213    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:54.122213    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:54.122843    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:54.122843    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:54.122843    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:54.122843    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:54 GMT
	I0603 14:50:54.122843    9752 round_trippers.go:580]     Audit-Id: dfeec84e-0cfc-4606-b59c-19a0da83fa44
	I0603 14:50:54.122843    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:54.625959    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:54.626289    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:54.626289    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:54.626289    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:54.631578    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:50:54.632619    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:54.632619    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:54.632619    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:54 GMT
	I0603 14:50:54.632619    9752 round_trippers.go:580]     Audit-Id: 63234624-7d5b-4158-9c65-ba2a01220a7f
	I0603 14:50:54.632619    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:54.632709    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:54.632709    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:54.633044    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:55.125344    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:55.125344    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:55.125344    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:55.125344    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:55.129614    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:55.129614    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:55.129614    9752 round_trippers.go:580]     Audit-Id: 04711d7c-7579-4b8c-81e5-9337dadb9007
	I0603 14:50:55.129614    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:55.129614    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:55.129614    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:55.129614    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:55.129614    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:55 GMT
	I0603 14:50:55.130187    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:55.130913    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:50:55.624434    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:55.624564    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:55.624564    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:55.624564    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:55.632792    9752 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 14:50:55.632792    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:55.632792    9752 round_trippers.go:580]     Audit-Id: f7e03f1d-a29e-4f34-aac7-e6d5b46d1676
	I0603 14:50:55.632792    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:55.632792    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:55.632792    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:55.632792    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:55.632792    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:55 GMT
	I0603 14:50:55.632792    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:56.126700    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:56.126700    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:56.126700    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:56.126785    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:56.131521    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:56.131584    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:56.131584    9752 round_trippers.go:580]     Audit-Id: 679db11a-a050-4318-a55a-218dfb801e32
	I0603 14:50:56.131584    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:56.131584    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:56.131584    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:56.131584    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:56.131584    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:56 GMT
	I0603 14:50:56.132443    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:56.623012    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:56.623012    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:56.623012    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:56.623012    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:56.627893    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:56.627893    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:56.627893    9752 round_trippers.go:580]     Audit-Id: 21231a93-82fa-4d46-bd84-5cea81fbcdb9
	I0603 14:50:56.627893    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:56.627893    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:56.627893    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:56.627893    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:56.627893    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:56 GMT
	I0603 14:50:56.627893    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:57.120771    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:57.120890    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:57.120890    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:57.120890    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:57.125709    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:57.126409    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:57.126409    9752 round_trippers.go:580]     Audit-Id: 21f3fae7-37ff-41ec-92bb-1ad85b073205
	I0603 14:50:57.126409    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:57.126409    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:57.126409    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:57.126409    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:57.126409    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:57 GMT
	I0603 14:50:57.126538    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:57.620506    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:57.620506    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:57.620625    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:57.620625    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:57.625926    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:50:57.625926    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:57.626035    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:57.626035    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:57.626035    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:57.626099    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:57 GMT
	I0603 14:50:57.626099    9752 round_trippers.go:580]     Audit-Id: 5f7aeb7a-d5a1-4885-b93f-024c0895f285
	I0603 14:50:57.626099    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:57.626428    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:57.626633    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:50:58.121639    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:58.121639    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:58.121639    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:58.121639    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:58.125234    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:58.125234    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:58.125234    9752 round_trippers.go:580]     Audit-Id: 1e3d377b-5ea2-4ad8-af09-76102f22e181
	I0603 14:50:58.125234    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:58.125495    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:58.125495    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:58.125495    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:58.125495    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:58 GMT
	I0603 14:50:58.126430    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:58.618210    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:58.618474    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:58.618474    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:58.618474    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:58.621874    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:58.621874    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:58.621874    9752 round_trippers.go:580]     Audit-Id: 2459caa2-56c5-4a30-bf1b-b87d0287d38f
	I0603 14:50:58.621874    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:58.621874    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:58.621874    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:58.621874    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:58.621874    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:58 GMT
	I0603 14:50:58.622734    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:59.131147    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:59.131147    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:59.131147    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:59.131147    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:59.135357    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:59.135379    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:59.135379    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:59.135379    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:59 GMT
	I0603 14:50:59.135379    9752 round_trippers.go:580]     Audit-Id: 08a6700a-ec14-4dd8-b1b6-b901da8e9da6
	I0603 14:50:59.135379    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:59.135472    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:59.135472    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:59.135645    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:59.625621    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:59.625704    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:59.625704    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:59.625704    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:59.629532    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:59.629532    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:59.629925    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:59.629925    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:59 GMT
	I0603 14:50:59.629925    9752 round_trippers.go:580]     Audit-Id: 2b16e639-54d8-4963-9632-80e4cd30565b
	I0603 14:50:59.629925    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:59.629925    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:59.629925    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:59.629925    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:59.630762    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:00.118291    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:00.118533    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:00.118533    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:00.118533    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:00.122411    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:00.122411    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:00.122411    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:00.122411    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:00.122480    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:00 GMT
	I0603 14:51:00.122480    9752 round_trippers.go:580]     Audit-Id: e0d2ab5b-52b6-4f3f-ad61-ba5cb51f81aa
	I0603 14:51:00.122480    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:00.122480    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:00.122657    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:00.627785    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:00.627785    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:00.627785    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:00.628040    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:00.631259    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:00.632191    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:00.632191    9752 round_trippers.go:580]     Audit-Id: 16c4791e-2245-45f2-90d0-86de4b8c6f5a
	I0603 14:51:00.632191    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:00.632191    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:00.632191    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:00.632257    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:00.632257    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:00 GMT
	I0603 14:51:00.632352    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:01.120137    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:01.120137    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:01.120137    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:01.120137    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:01.127104    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:01.127104    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:01.127104    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:01.127104    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:01 GMT
	I0603 14:51:01.127104    9752 round_trippers.go:580]     Audit-Id: d20e5242-88bb-48d2-afdd-bf50550a0b8b
	I0603 14:51:01.127104    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:01.127104    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:01.127104    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:01.127104    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:01.627946    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:01.628303    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:01.628303    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:01.628303    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:01.631639    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:01.631929    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:01.631929    9752 round_trippers.go:580]     Audit-Id: 425a9b5b-ff68-4146-9f10-fe76a714a9be
	I0603 14:51:01.631929    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:01.631929    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:01.631929    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:01.631929    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:01.631929    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:01 GMT
	I0603 14:51:01.632362    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:01.632853    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:02.122331    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:02.122331    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:02.122331    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:02.122629    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:02.127498    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:02.127498    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:02.127498    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:02.127498    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:02 GMT
	I0603 14:51:02.127705    9752 round_trippers.go:580]     Audit-Id: 913a1be9-23fe-417a-bab1-1acb0afdfd10
	I0603 14:51:02.127705    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:02.127705    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:02.127705    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:02.127972    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:02.625414    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:02.625644    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:02.625644    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:02.625644    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:02.632515    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:02.632515    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:02.632515    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:02.632515    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:02 GMT
	I0603 14:51:02.632515    9752 round_trippers.go:580]     Audit-Id: 179f32dc-1fa8-4abb-b807-a9c2272e6df6
	I0603 14:51:02.632515    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:02.632515    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:02.632515    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:02.633248    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:03.125320    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:03.125320    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:03.125320    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:03.125320    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:03.131768    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:03.131861    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:03.131874    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:03.131874    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:03 GMT
	I0603 14:51:03.131874    9752 round_trippers.go:580]     Audit-Id: 0749c042-ac75-4284-a6e3-dbe12850d383
	I0603 14:51:03.131874    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:03.131874    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:03.131874    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:03.133117    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:03.627026    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:03.627349    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:03.627349    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:03.627349    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:03.631893    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:03.631893    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:03.631893    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:03.631893    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:03.631893    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:03.631893    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:03 GMT
	I0603 14:51:03.631893    9752 round_trippers.go:580]     Audit-Id: e8f02c8c-7148-418a-8fe2-68db6af2fd17
	I0603 14:51:03.631893    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:03.631893    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:04.127071    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:04.127071    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:04.127071    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:04.127071    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:04.131594    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:04.131594    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:04.131594    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:04.131594    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:04 GMT
	I0603 14:51:04.131594    9752 round_trippers.go:580]     Audit-Id: 5f68bd15-8cc9-418b-8c8f-d5164128b955
	I0603 14:51:04.131594    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:04.132579    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:04.132579    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:04.132969    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:04.133542    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:04.616515    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:04.616515    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:04.616515    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:04.616515    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:04.620830    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:04.620830    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:04.620830    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:04.620830    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:04.620830    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:04.620830    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:04 GMT
	I0603 14:51:04.620830    9752 round_trippers.go:580]     Audit-Id: 207aa0ba-2430-42aa-9735-fdece6cc9c76
	I0603 14:51:04.620830    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:04.620830    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:05.116583    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:05.116583    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:05.116583    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:05.116583    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:05.120184    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:05.120184    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:05.120184    9752 round_trippers.go:580]     Audit-Id: cf44b14b-b080-42d5-b843-0853df5f75d0
	I0603 14:51:05.120184    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:05.120184    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:05.121219    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:05.121219    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:05.121272    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:05 GMT
	I0603 14:51:05.121525    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:05.618317    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:05.618317    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:05.618317    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:05.618317    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:05.622812    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:05.623357    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:05.623357    9752 round_trippers.go:580]     Audit-Id: 9e7b0424-b565-4d90-ac2a-e1655bac4f84
	I0603 14:51:05.623357    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:05.623357    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:05.623428    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:05.623428    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:05.623428    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:05 GMT
	I0603 14:51:05.623686    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:06.118466    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:06.118584    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:06.118639    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:06.118639    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:06.122208    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:06.122208    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:06.122208    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:06.122208    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:06.122399    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:06.122399    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:06.122399    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:06 GMT
	I0603 14:51:06.122399    9752 round_trippers.go:580]     Audit-Id: 8def6d80-1301-4d24-a58e-316862413164
	I0603 14:51:06.122594    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:06.620960    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:06.620960    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:06.620960    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:06.620960    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:06.624717    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:06.624717    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:06.625441    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:06.625441    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:06.625441    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:06.625441    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:06 GMT
	I0603 14:51:06.625441    9752 round_trippers.go:580]     Audit-Id: ff9bdd93-67bb-4e6d-860e-dcb39944ecaf
	I0603 14:51:06.625441    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:06.625781    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:06.626350    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:07.122480    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:07.122480    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:07.122480    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:07.122698    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:07.128714    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:07.128714    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:07.128714    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:07.128714    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:07.128714    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:07.128714    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:07.128714    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:07 GMT
	I0603 14:51:07.128714    9752 round_trippers.go:580]     Audit-Id: c2ca4302-b910-41e4-a837-60ae14349d6f
	I0603 14:51:07.129502    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:07.623294    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:07.623294    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:07.623294    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:07.623294    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:07.629960    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:07.629960    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:07.629960    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:07.629960    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:07 GMT
	I0603 14:51:07.629960    9752 round_trippers.go:580]     Audit-Id: 22181386-77ab-495e-b4d7-03be4ed61ebb
	I0603 14:51:07.629960    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:07.629960    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:07.629960    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:07.630612    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:08.124269    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:08.124269    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:08.124269    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:08.124269    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:08.129297    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:08.129297    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:08.129297    9752 round_trippers.go:580]     Audit-Id: 20eddc7c-d9ce-4fc1-babe-e5d5f39046bb
	I0603 14:51:08.129297    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:08.129297    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:08.129297    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:08.129297    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:08.129297    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:08 GMT
	I0603 14:51:08.130039    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:08.622777    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:08.622837    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:08.622909    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:08.622909    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:08.626844    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:08.626844    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:08.626844    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:08.626844    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:08.626844    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:08.626844    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:08.626844    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:08 GMT
	I0603 14:51:08.626844    9752 round_trippers.go:580]     Audit-Id: 8b8028ce-1346-4566-ad33-ab4ba9627375
	I0603 14:51:08.626844    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:08.627690    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:09.124919    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:09.125046    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:09.125046    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:09.125159    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:09.128418    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:09.129094    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:09.129094    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:09 GMT
	I0603 14:51:09.129094    9752 round_trippers.go:580]     Audit-Id: d15f1ed7-a3b5-4516-9a26-a51d7092044b
	I0603 14:51:09.129094    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:09.129094    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:09.129094    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:09.129094    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:09.129294    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:09.621871    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:09.621958    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:09.621958    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:09.621958    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:09.626656    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:09.626760    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:09.626760    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:09.626760    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:09.626760    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:09 GMT
	I0603 14:51:09.626760    9752 round_trippers.go:580]     Audit-Id: bd2c293c-6b08-4375-9505-36b8c8461e69
	I0603 14:51:09.626760    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:09.626760    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:09.627238    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:10.124522    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:10.124522    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:10.124629    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:10.124629    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:10.127953    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:10.127953    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:10.127953    9752 round_trippers.go:580]     Audit-Id: 20366f6b-3218-4e16-8331-3b20ae03a1e7
	I0603 14:51:10.127953    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:10.127953    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:10.127953    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:10.127953    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:10.127953    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:10 GMT
	I0603 14:51:10.129290    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:10.623257    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:10.623466    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:10.623466    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:10.623466    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:10.626878    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:10.626878    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:10.626878    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:10 GMT
	I0603 14:51:10.626878    9752 round_trippers.go:580]     Audit-Id: a7ac0590-865d-4fbf-a917-f6cfc9449896
	I0603 14:51:10.626878    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:10.626878    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:10.626878    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:10.626878    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:10.628175    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:10.629144    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:11.127780    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:11.127780    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:11.128047    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:11.128047    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:11.134519    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:11.134519    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:11.134519    9752 round_trippers.go:580]     Audit-Id: c351e713-70c0-4e44-b397-34c65689d556
	I0603 14:51:11.134519    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:11.134519    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:11.134519    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:11.134519    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:11.134519    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:11 GMT
	I0603 14:51:11.134519    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:11.628606    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:11.628779    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:11.628779    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:11.628779    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:11.632505    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:11.632505    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:11.632505    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:11.632505    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:11.632505    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:11.633256    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:11 GMT
	I0603 14:51:11.633256    9752 round_trippers.go:580]     Audit-Id: 56d235dc-a551-4b51-8981-5fdc185c4d29
	I0603 14:51:11.633256    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:11.633599    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:12.117709    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:12.117709    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:12.117709    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:12.117709    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:12.121306    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:12.121306    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:12.122187    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:12.122187    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:12 GMT
	I0603 14:51:12.122187    9752 round_trippers.go:580]     Audit-Id: 2b22cafe-f798-45ac-81f8-e0eecced20fb
	I0603 14:51:12.122187    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:12.122187    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:12.122187    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:12.123012    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:12.618185    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:12.618185    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:12.618185    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:12.618185    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:12.621852    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:12.621897    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:12.621897    9752 round_trippers.go:580]     Audit-Id: 3d136173-3187-47ca-8d9f-c3fb31ed2c4b
	I0603 14:51:12.621897    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:12.621897    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:12.621897    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:12.621982    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:12.621982    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:12 GMT
	I0603 14:51:12.622300    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:13.120084    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:13.120174    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:13.120174    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:13.120174    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:13.124031    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:13.124031    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:13.124031    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:13.124031    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:13.124031    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:13 GMT
	I0603 14:51:13.124031    9752 round_trippers.go:580]     Audit-Id: 80164430-8d7c-4c78-84e5-03c5126a06f4
	I0603 14:51:13.124031    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:13.124286    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:13.124400    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:13.125168    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:13.615525    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:13.615619    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:13.615619    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:13.615619    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:13.622213    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:13.622213    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:13.622213    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:13.622213    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:13.622213    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:13.622213    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:13 GMT
	I0603 14:51:13.622213    9752 round_trippers.go:580]     Audit-Id: 2ac43b9b-eb13-42e1-b4e9-ec54d85982f7
	I0603 14:51:13.622371    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:13.622824    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:14.116221    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:14.116534    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:14.116617    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:14.116617    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:14.120367    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:14.120648    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:14.120648    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:14.120648    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:14 GMT
	I0603 14:51:14.120648    9752 round_trippers.go:580]     Audit-Id: c6c4e8dd-3a89-4ea4-8e9e-4b65518e7619
	I0603 14:51:14.120648    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:14.120730    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:14.120730    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:14.121154    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:14.617882    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:14.617962    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:14.617962    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:14.617962    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:14.621411    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:14.621411    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:14.621411    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:14.622332    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:14 GMT
	I0603 14:51:14.622388    9752 round_trippers.go:580]     Audit-Id: 83922066-a321-4773-a639-2c49d96f76bd
	I0603 14:51:14.622431    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:14.622431    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:14.622431    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:14.622487    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:15.117969    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:15.118258    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:15.118258    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:15.118258    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:15.122117    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:15.122117    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:15.122117    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:15.122117    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:15.122117    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:15.122117    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:15.122117    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:15 GMT
	I0603 14:51:15.122117    9752 round_trippers.go:580]     Audit-Id: da9b409a-372f-4bb7-a800-84762be38f6c
	I0603 14:51:15.122117    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:15.630457    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:15.630457    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:15.630457    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:15.630457    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:15.635439    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:15.635439    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:15.635439    9752 round_trippers.go:580]     Audit-Id: 615a36cc-a4aa-4304-82fe-5097bdc9324c
	I0603 14:51:15.635439    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:15.635439    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:15.635439    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:15.635439    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:15.635439    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:15 GMT
	I0603 14:51:15.635439    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:15.636443    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:16.130187    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:16.130187    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:16.130187    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:16.130187    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:16.132839    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:16.132839    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:16.132839    9752 round_trippers.go:580]     Audit-Id: aed891a2-f0fb-44f6-a41a-352e6bd51eac
	I0603 14:51:16.132839    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:16.133854    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:16.133854    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:16.133854    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:16.133854    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:16 GMT
	I0603 14:51:16.133957    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:16.630469    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:16.630469    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:16.630469    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:16.630469    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:16.635959    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:16.636021    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:16.636021    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:16.636021    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:16.636021    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:16.636021    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:16 GMT
	I0603 14:51:16.636021    9752 round_trippers.go:580]     Audit-Id: d8cd06b8-729e-4056-be4e-1ac6a386ac0d
	I0603 14:51:16.636021    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:16.636546    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:17.117038    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:17.117384    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:17.117384    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:17.117473    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:17.121811    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:17.122443    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:17.122443    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:17.122443    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:17 GMT
	I0603 14:51:17.122443    9752 round_trippers.go:580]     Audit-Id: f851d33e-f5db-499d-b509-383d0c6bf0d3
	I0603 14:51:17.122443    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:17.122443    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:17.122443    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:17.122678    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:17.621881    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:17.621881    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:17.621881    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:17.621881    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:17.625713    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:17.625713    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:17.626358    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:17 GMT
	I0603 14:51:17.626358    9752 round_trippers.go:580]     Audit-Id: 7789b2c2-e784-4503-96c5-fe36a1ffcd2c
	I0603 14:51:17.626358    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:17.626358    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:17.626358    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:17.626358    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:17.626889    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:18.120091    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:18.120190    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:18.120190    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:18.120190    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:18.123982    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:18.124386    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:18.124386    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:18.124386    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:18.124386    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:18 GMT
	I0603 14:51:18.124386    9752 round_trippers.go:580]     Audit-Id: d699f7b7-b4c5-4a22-a87b-f56f83980769
	I0603 14:51:18.124386    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:18.124386    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:18.124631    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:18.125141    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:18.619372    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:18.619481    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:18.619481    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:18.619481    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:18.623928    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:18.623928    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:18.623928    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:18.623928    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:18 GMT
	I0603 14:51:18.623928    9752 round_trippers.go:580]     Audit-Id: 70dd0080-c668-496f-beda-cb99e58719ec
	I0603 14:51:18.624070    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:18.624070    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:18.624070    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:18.624937    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:19.116510    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:19.116876    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:19.116876    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:19.116876    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:19.123353    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:19.123353    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:19.123353    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:19.123353    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:19 GMT
	I0603 14:51:19.123353    9752 round_trippers.go:580]     Audit-Id: 8be1ffa6-5e4c-4225-bda8-2a6480368274
	I0603 14:51:19.123353    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:19.123353    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:19.123353    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:19.123353    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:19.617543    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:19.617543    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:19.617800    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:19.617800    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:19.621225    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:19.621225    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:19.621225    9752 round_trippers.go:580]     Audit-Id: 7684cd2f-3dcb-466d-b846-21a972f24581
	I0603 14:51:19.621225    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:19.621225    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:19.621225    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:19.621225    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:19.621225    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:19 GMT
	I0603 14:51:19.622347    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:20.119385    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:20.119385    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:20.119385    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:20.119385    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:20.123664    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:20.123664    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:20.123664    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:20.123751    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:20.123751    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:20.123751    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:20 GMT
	I0603 14:51:20.123751    9752 round_trippers.go:580]     Audit-Id: 652b7f77-80fe-41cf-b711-6625ca26244c
	I0603 14:51:20.123751    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:20.124019    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:20.619112    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:20.619112    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:20.619112    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:20.619112    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:20.622734    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:20.623166    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:20.623166    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:20.623166    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:20.623166    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:20.623166    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:20 GMT
	I0603 14:51:20.623239    9752 round_trippers.go:580]     Audit-Id: 4fecd7ea-3a9d-4e3d-af7d-eddcb53d8ddd
	I0603 14:51:20.623239    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:20.623239    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:20.624216    9752 node_ready.go:49] node "multinode-720500" has status "Ready":"True"
	I0603 14:51:20.624287    9752 node_ready.go:38] duration metric: took 36.5093044s for node "multinode-720500" to be "Ready" ...
	I0603 14:51:20.624314    9752 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
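
The log above shows the ~500ms poll loop that repeatedly GETs the node object until its Ready condition flips to True (36.5s here), before moving on to the system-critical pods. As a rough illustration only (this is a minimal client-go sketch under assumed names, not minikube's actual node_ready.go, and the kubeconfig path is a placeholder), the same readiness poll could look like:

// nodeready_sketch.go - hypothetical sketch of a node Ready poll like the one logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForNodeReady polls the API server until the node is Ready or the timeout expires.
func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval visible in the log timestamps
	}
	return fmt.Errorf("node %q never became Ready within %s", name, timeout)
}

func main() {
	// Placeholder kubeconfig path; minikube constructs its client differently.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "multinode-720500", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
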
	I0603 14:51:20.624410    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods
	I0603 14:51:20.624495    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:20.624495    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:20.624495    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:20.632842    9752 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 14:51:20.632842    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:20.632842    9752 round_trippers.go:580]     Audit-Id: 91450bf1-4ce4-4a0b-9837-3e5d395e2e6e
	I0603 14:51:20.632842    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:20.632842    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:20.632842    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:20.632842    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:20.632842    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:20 GMT
	I0603 14:51:20.634212    9752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1959"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87038 chars]
	I0603 14:51:20.637850    9752 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:20.637850    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:20.637850    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:20.637850    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:20.638380    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:20.641347    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:20.641347    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:20.641347    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:20.641347    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:20.641347    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:20.641347    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:20.641347    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:20 GMT
	I0603 14:51:20.641347    9752 round_trippers.go:580]     Audit-Id: bb371103-81d3-4653-b86c-15dcfeb2e90e
	I0603 14:51:20.641630    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:20.642228    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:20.642284    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:20.642284    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:20.642284    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:20.643620    9752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:51:20.643620    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:20.643620    9752 round_trippers.go:580]     Audit-Id: b518e2ae-de04-49df-b025-03d350dd632b
	I0603 14:51:20.644721    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:20.644721    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:20.644721    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:20.644721    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:20.644721    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:20 GMT
	I0603 14:51:20.644721    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:21.148350    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:21.148422    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:21.148422    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:21.148422    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:21.153779    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:21.153845    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:21.153845    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:21.153845    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:21 GMT
	I0603 14:51:21.153845    9752 round_trippers.go:580]     Audit-Id: 384cf0a4-0904-48bb-a05f-857e76431560
	I0603 14:51:21.153845    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:21.153845    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:21.153845    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:21.153845    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:21.154917    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:21.154989    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:21.154989    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:21.154989    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:21.159240    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:21.159240    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:21.159240    9752 round_trippers.go:580]     Audit-Id: aaff5379-96dc-4804-80cb-63ce111dd3cb
	I0603 14:51:21.159240    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:21.159240    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:21.159900    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:21.159900    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:21.159900    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:21 GMT
	I0603 14:51:21.160055    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:21.646035    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:21.646261    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:21.646261    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:21.646261    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:21.649633    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:21.650303    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:21.650303    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:21.650303    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:21.650303    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:21.650303    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:21 GMT
	I0603 14:51:21.650303    9752 round_trippers.go:580]     Audit-Id: a0ab756c-db81-4bef-a29c-5c2f16c7c946
	I0603 14:51:21.650303    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:21.650303    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:21.651639    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:21.651639    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:21.651738    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:21.651738    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:21.654595    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:21.654595    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:21.654595    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:21.654595    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:21.654595    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:21 GMT
	I0603 14:51:21.654595    9752 round_trippers.go:580]     Audit-Id: 1b248904-9e68-41a8-b70b-36b890e04af6
	I0603 14:51:21.654595    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:21.654595    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:21.655876    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:22.145882    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:22.145985    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:22.145985    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:22.145985    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:22.150436    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:22.150436    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:22.150579    9752 round_trippers.go:580]     Audit-Id: 1f616cdf-fd49-4a1d-94b4-7be135d32db0
	I0603 14:51:22.150579    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:22.150579    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:22.150579    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:22.150579    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:22.150579    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:22 GMT
	I0603 14:51:22.150860    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:22.151548    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:22.151634    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:22.151634    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:22.151634    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:22.155369    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:22.155369    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:22.155369    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:22.155369    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:22.155369    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:22.155456    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:22.155456    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:22 GMT
	I0603 14:51:22.155456    9752 round_trippers.go:580]     Audit-Id: c3fbcea9-3d5f-43ee-bb0e-3e17e7ee651f
	I0603 14:51:22.155673    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:22.645857    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:22.645976    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:22.645976    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:22.645976    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:22.649459    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:22.649459    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:22.649459    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:22.649806    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:22.649806    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:22 GMT
	I0603 14:51:22.649806    9752 round_trippers.go:580]     Audit-Id: 0076cec9-bc97-4b1b-a213-2a22fb01e849
	I0603 14:51:22.649806    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:22.649806    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:22.650045    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:22.650881    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:22.650881    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:22.650881    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:22.650881    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:22.657479    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:22.657479    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:22.657479    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:22 GMT
	I0603 14:51:22.657479    9752 round_trippers.go:580]     Audit-Id: dd627c4f-be77-41e8-b956-a2244c41cb2b
	I0603 14:51:22.657479    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:22.657479    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:22.657479    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:22.657479    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:22.658044    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:22.658279    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
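
At this point the loop is repeating the same pair of GETs (pod, then node) until the coredns pod reports Ready. Purely as an illustrative sketch under the same assumptions as above (client-go, placeholder kubeconfig path; not minikube's pod_ready.go), the per-pod check amounts to reading the pod's Ready condition:

// podready_sketch.go - hypothetical check of a pod's Ready condition, mirroring the logged poll.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-c9wpc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
}
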
	I0603 14:51:23.143043    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:23.143327    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:23.143327    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:23.143327    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:23.146769    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:23.146769    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:23.146769    9752 round_trippers.go:580]     Audit-Id: 2965518a-59e9-446f-813e-6e838b2bb701
	I0603 14:51:23.147382    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:23.147382    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:23.147382    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:23.147382    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:23.147382    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:23 GMT
	I0603 14:51:23.147573    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:23.148371    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:23.148480    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:23.148480    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:23.148480    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:23.152314    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:23.152314    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:23.152314    9752 round_trippers.go:580]     Audit-Id: 1286abdf-4034-4189-a2c5-3228082a5d8e
	I0603 14:51:23.152314    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:23.152314    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:23.152314    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:23.152314    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:23.152314    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:23 GMT
	I0603 14:51:23.152314    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:23.648495    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:23.648570    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:23.648570    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:23.648570    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:23.652317    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:23.652834    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:23.652834    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:23.652834    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:23 GMT
	I0603 14:51:23.652834    9752 round_trippers.go:580]     Audit-Id: bb4b0096-aa1b-4b6a-b974-15073a793340
	I0603 14:51:23.652834    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:23.652834    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:23.652834    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:23.653145    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:23.653471    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:23.653471    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:23.653471    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:23.653471    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:23.660415    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:23.660415    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:23.660415    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:23.660415    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:23.660415    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:23.660519    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:23.660519    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:23 GMT
	I0603 14:51:23.660519    9752 round_trippers.go:580]     Audit-Id: 7a7271f0-5161-42ed-9db4-f52edff31af1
	I0603 14:51:23.660574    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:24.148329    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:24.148329    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:24.148329    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:24.148329    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:24.152745    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:24.153216    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:24.153216    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:24.153216    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:24.153216    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:24.153216    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:24.153216    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:24 GMT
	I0603 14:51:24.153216    9752 round_trippers.go:580]     Audit-Id: 26f0014e-defc-4c70-8525-7dce10e000e7
	I0603 14:51:24.154048    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:24.154797    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:24.154797    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:24.154797    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:24.154797    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:24.157641    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:24.157641    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:24.158457    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:24 GMT
	I0603 14:51:24.158457    9752 round_trippers.go:580]     Audit-Id: ea8e37d2-e9dd-4674-9e17-08ee0c5e2282
	I0603 14:51:24.158457    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:24.158457    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:24.158457    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:24.158457    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:24.158514    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:24.650894    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:24.650894    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:24.650894    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:24.650894    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:24.654862    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:24.654967    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:24.654967    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:24.654967    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:24.654967    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:24 GMT
	I0603 14:51:24.654967    9752 round_trippers.go:580]     Audit-Id: 8d8b46f5-93f3-4a37-b03e-2d95020f7172
	I0603 14:51:24.655059    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:24.655059    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:24.655310    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:24.655960    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:24.655960    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:24.655960    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:24.655960    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:24.659616    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:24.659616    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:24.659616    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:24.659616    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:24 GMT
	I0603 14:51:24.659616    9752 round_trippers.go:580]     Audit-Id: d761579c-a1f5-4fcd-94d6-9a10d971e380
	I0603 14:51:24.659616    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:24.659616    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:24.659616    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:24.660147    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:24.660697    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:25.147282    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:25.147282    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:25.147353    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:25.147353    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:25.151000    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:25.151588    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:25.151588    9752 round_trippers.go:580]     Audit-Id: aff97a1d-ab63-467d-9b61-ac7e144da460
	I0603 14:51:25.151588    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:25.151588    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:25.151588    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:25.151588    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:25.151588    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:25 GMT
	I0603 14:51:25.151762    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:25.152544    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:25.152624    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:25.152624    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:25.152624    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:25.155216    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:25.155216    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:25.155216    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:25.155216    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:25.155216    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:25.155216    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:25.155216    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:25 GMT
	I0603 14:51:25.155216    9752 round_trippers.go:580]     Audit-Id: 8443f917-b36c-4b81-ac73-01bd81d50672
	I0603 14:51:25.155751    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:25.647077    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:25.647163    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:25.647163    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:25.647163    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:25.651703    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:25.651703    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:25.651703    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:25.651703    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:25.651801    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:25.651801    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:25 GMT
	I0603 14:51:25.651801    9752 round_trippers.go:580]     Audit-Id: 1ac74dbd-3751-4813-a9b3-a72cf029807a
	I0603 14:51:25.651801    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:25.651864    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:25.652704    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:25.652770    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:25.652770    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:25.652770    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:25.657179    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:25.657316    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:25.657316    9752 round_trippers.go:580]     Audit-Id: caa0aa34-6b35-49e9-bded-272cbe771523
	I0603 14:51:25.657316    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:25.657316    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:25.657534    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:25.657566    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:25.657566    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:25 GMT
	I0603 14:51:25.658120    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:26.152409    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:26.152671    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:26.152671    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:26.152671    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:26.156583    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:26.156583    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:26.156670    9752 round_trippers.go:580]     Audit-Id: 49a0f896-20eb-472c-8b0e-0d3c1f83b38c
	I0603 14:51:26.156670    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:26.156670    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:26.156670    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:26.156670    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:26.156670    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:26 GMT
	I0603 14:51:26.156726    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:26.157516    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:26.157516    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:26.157516    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:26.157516    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:26.160111    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:26.160111    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:26.160111    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:26 GMT
	I0603 14:51:26.160111    9752 round_trippers.go:580]     Audit-Id: 2b9fd02f-166a-46bb-8021-4b9463b8914a
	I0603 14:51:26.160111    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:26.160111    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:26.160111    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:26.160111    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:26.161100    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:26.642976    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:26.642976    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:26.643094    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:26.643094    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:26.646465    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:26.646465    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:26.646465    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:26.646465    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:26.646465    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:26.646465    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:26 GMT
	I0603 14:51:26.646465    9752 round_trippers.go:580]     Audit-Id: eb509d35-8b5f-400d-ad92-4bdbc2447f19
	I0603 14:51:26.646465    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:26.647882    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:26.648015    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:26.648605    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:26.648605    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:26.648605    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:26.650893    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:26.650893    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:26.651525    9752 round_trippers.go:580]     Audit-Id: d2cb2ffd-854b-4752-9daf-08b581750d0e
	I0603 14:51:26.651525    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:26.651525    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:26.651525    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:26.651525    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:26.651525    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:26 GMT
	I0603 14:51:26.651876    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:27.141546    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:27.141546    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:27.141546    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:27.141546    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:27.145146    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:27.145146    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:27.145146    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:27 GMT
	I0603 14:51:27.145146    9752 round_trippers.go:580]     Audit-Id: f3a66e1f-f599-4b2a-805a-45e589abf079
	I0603 14:51:27.145146    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:27.145146    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:27.145358    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:27.145358    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:27.146089    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:27.147032    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:27.147032    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:27.147032    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:27.147032    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:27.149830    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:27.149830    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:27.149956    9752 round_trippers.go:580]     Audit-Id: 1cecb31d-addf-4706-a4b0-ebf6781d0646
	I0603 14:51:27.149956    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:27.149956    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:27.149956    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:27.149956    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:27.149956    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:27 GMT
	I0603 14:51:27.150387    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:27.150819    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:27.645657    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:27.645657    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:27.645657    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:27.645657    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:27.649315    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:27.649315    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:27.649315    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:27.649315    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:27.649315    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:27.649315    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:27 GMT
	I0603 14:51:27.649315    9752 round_trippers.go:580]     Audit-Id: e523d6f9-1c36-4356-a356-26ba3ddc439c
	I0603 14:51:27.649315    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:27.650475    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:27.651356    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:27.651413    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:27.651413    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:27.651413    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:27.654180    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:27.654278    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:27.654278    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:27.654278    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:27.654278    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:27.654278    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:27 GMT
	I0603 14:51:27.654278    9752 round_trippers.go:580]     Audit-Id: c4705820-06b4-4ec3-a554-93b9829efcd6
	I0603 14:51:27.654365    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:27.654860    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:28.143016    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:28.143016    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:28.143016    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:28.143016    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:28.147969    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:28.148007    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:28.148007    9752 round_trippers.go:580]     Audit-Id: ec09f366-faf4-452c-ae46-0ef7a2db4532
	I0603 14:51:28.148007    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:28.148007    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:28.148007    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:28.148007    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:28.148007    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:28 GMT
	I0603 14:51:28.148007    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:28.149289    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:28.149406    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:28.149406    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:28.149406    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:28.151665    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:28.151665    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:28.151665    9752 round_trippers.go:580]     Audit-Id: 780dc03e-047f-417a-b40f-8335453d31b3
	I0603 14:51:28.151665    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:28.151665    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:28.151665    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:28.151665    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:28.151665    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:28 GMT
	I0603 14:51:28.151665    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:28.648416    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:28.648535    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:28.648602    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:28.648602    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:28.654547    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:28.654604    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:28.654604    9752 round_trippers.go:580]     Audit-Id: 7bb723f2-3a20-4367-b3b3-26cca104305b
	I0603 14:51:28.654604    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:28.654604    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:28.654604    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:28.654604    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:28.654604    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:28 GMT
	I0603 14:51:28.655353    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:28.656182    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:28.656182    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:28.656182    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:28.656182    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:28.660317    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:28.661203    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:28.661203    9752 round_trippers.go:580]     Audit-Id: b485a01b-ad2c-4bfe-8699-8c97857da98c
	I0603 14:51:28.661203    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:28.661203    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:28.661203    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:28.661203    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:28.661203    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:28 GMT
	I0603 14:51:28.661376    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:29.150213    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:29.150213    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:29.150213    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:29.150213    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:29.153798    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:29.153798    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:29.153798    9752 round_trippers.go:580]     Audit-Id: ef433026-9f16-45d4-a8a5-0e5967f2f372
	I0603 14:51:29.153798    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:29.153798    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:29.154739    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:29.154739    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:29.154739    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:29 GMT
	I0603 14:51:29.154922    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:29.155648    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:29.155648    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:29.155648    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:29.155648    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:29.157517    9752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:51:29.158249    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:29.158249    9752 round_trippers.go:580]     Audit-Id: ca154fcc-9b68-49e0-94ee-472316b55932
	I0603 14:51:29.158249    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:29.158249    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:29.158351    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:29.158382    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:29.158382    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:29 GMT
	I0603 14:51:29.158645    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:29.159152    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
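	(Editor's note, for context on the loop above: this stretch of the log is a readiness wait. Roughly every 500 ms the minikube binary re-fetches the coredns Pod and the multinode-720500 Node from the API server and checks whether the Pod's Ready condition has turned True; pod_ready.go keeps reporting "Ready":"False" until it does. As a rough, illustrative sketch only, and not minikube's own implementation, a minimal client-go loop with the same shape might look like the following. The kubeconfig path is an assumed placeholder, and the namespace/pod name are simply copied from the log as example inputs.)

	// Illustrative sketch of a Pod readiness poll with client-go.
	// Assumptions: kubeconfig path is a placeholder; interval/timeout are
	// inferred from the log cadence, not taken from minikube's sources.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the Pod's Ready condition is True -- the same
	// check the log above keeps showing as "Ready":"False".
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig location for the sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		const (
			ns   = "kube-system"
			name = "coredns-7db6d8ff4d-c9wpc" // pod name from the log, used as an example
		)

		// Poll every 500ms (matching the timestamps above), up to 6 minutes.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				return isPodReady(pod), nil
			})
		if err != nil {
			fmt.Println("pod never became Ready:", err)
			return
		}
		fmt.Println("pod is Ready")
	}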
	I0603 14:51:29.648244    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:29.648409    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:29.648486    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:29.648486    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:29.652614    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:29.652614    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:29.652614    9752 round_trippers.go:580]     Audit-Id: e92bb439-6918-4bb1-abd0-70e21c07a802
	I0603 14:51:29.652614    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:29.652614    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:29.652614    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:29.652614    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:29.652614    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:29 GMT
	I0603 14:51:29.652978    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:29.653311    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:29.653311    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:29.653311    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:29.653311    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:29.657023    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:29.657023    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:29.657023    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:29.657102    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:29.657102    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:29.657102    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:29 GMT
	I0603 14:51:29.657102    9752 round_trippers.go:580]     Audit-Id: 5e2543e9-d088-4782-85de-43f615b3b5d1
	I0603 14:51:29.657102    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:29.657558    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:30.147318    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:30.147318    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:30.147318    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:30.147318    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:30.150906    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:30.151543    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:30.151543    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:30.151543    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:30.151543    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:30 GMT
	I0603 14:51:30.151543    9752 round_trippers.go:580]     Audit-Id: f9ab0c38-f0ee-45f1-b445-e4ac54a42425
	I0603 14:51:30.151683    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:30.151683    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:30.152276    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:30.153024    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:30.153157    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:30.153157    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:30.153157    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:30.156895    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:30.157027    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:30.157027    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:30 GMT
	I0603 14:51:30.157027    9752 round_trippers.go:580]     Audit-Id: 4997e132-5510-4f10-ab88-fe85a144e703
	I0603 14:51:30.157027    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:30.157027    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:30.157027    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:30.157076    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:30.157480    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:30.648887    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:30.649073    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:30.649073    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:30.649073    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:30.653502    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:30.653502    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:30.653502    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:30 GMT
	I0603 14:51:30.653502    9752 round_trippers.go:580]     Audit-Id: a01fa2fc-8243-4dcc-b5f7-5cb035420923
	I0603 14:51:30.653726    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:30.653726    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:30.653726    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:30.653726    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:30.654183    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:30.655094    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:30.655094    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:30.655094    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:30.655209    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:30.657307    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:30.657307    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:30.657307    9752 round_trippers.go:580]     Audit-Id: b1ff48fd-45c8-484c-b75f-ce51a5a1c0e0
	I0603 14:51:30.657307    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:30.657307    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:30.657307    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:30.657307    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:30.657307    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:30 GMT
	I0603 14:51:30.657719    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:31.152843    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:31.152843    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:31.152843    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:31.152843    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:31.158166    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:31.158166    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:31.158166    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:31 GMT
	I0603 14:51:31.158166    9752 round_trippers.go:580]     Audit-Id: e4a5e6ab-b5f6-4355-8954-072dc4f66296
	I0603 14:51:31.158166    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:31.158166    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:31.158166    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:31.158166    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:31.158166    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:31.159126    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:31.159192    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:31.159192    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:31.159192    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:31.161456    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:31.161456    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:31.161456    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:31.161456    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:31 GMT
	I0603 14:51:31.161456    9752 round_trippers.go:580]     Audit-Id: e841f044-67e7-4589-bfd1-b177e9b2764d
	I0603 14:51:31.161456    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:31.161456    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:31.162297    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:31.162994    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:31.163545    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:31.638468    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:31.638531    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:31.638531    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:31.638531    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:31.645620    9752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 14:51:31.645620    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:31.645620    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:31.645620    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:31.645620    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:31.645620    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:31.645620    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:31 GMT
	I0603 14:51:31.645620    9752 round_trippers.go:580]     Audit-Id: 16a229e4-e76a-46ea-aabd-fc722bc1f0b0
	I0603 14:51:31.645620    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:31.646663    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:31.646663    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:31.646663    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:31.646663    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:31.649251    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:31.649952    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:31.649952    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:31 GMT
	I0603 14:51:31.649952    9752 round_trippers.go:580]     Audit-Id: f9fbf51a-5f0e-402e-989c-d3b40a842fce
	I0603 14:51:31.649952    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:31.649952    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:31.649952    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:31.649952    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:31.650263    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:32.152429    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:32.152429    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:32.152429    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:32.152429    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:32.157040    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:32.157040    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:32.157040    9752 round_trippers.go:580]     Audit-Id: 31ae400a-f025-4d3a-8b55-ad2774a9279b
	I0603 14:51:32.157040    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:32.157040    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:32.157040    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:32.157040    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:32.157040    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:32 GMT
	I0603 14:51:32.157040    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:32.158207    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:32.158207    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:32.158207    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:32.158207    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:32.161270    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:32.161270    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:32.161270    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:32.161920    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:32 GMT
	I0603 14:51:32.161920    9752 round_trippers.go:580]     Audit-Id: deb4bab4-c395-4a96-b2f1-4d558a2c5618
	I0603 14:51:32.161920    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:32.161920    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:32.161920    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:32.161920    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:32.649759    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:32.649759    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:32.649759    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:32.649759    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:32.655609    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:32.656046    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:32.656046    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:32.656046    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:32.656046    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:32.656046    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:32.656046    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:32 GMT
	I0603 14:51:32.656046    9752 round_trippers.go:580]     Audit-Id: 31371580-5bdb-43a5-b7e9-b9daa5d0fad8
	I0603 14:51:32.656297    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:32.657117    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:32.657189    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:32.657189    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:32.657189    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:32.659616    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:32.660536    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:32.660536    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:32.660536    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:32.660536    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:32.660536    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:32 GMT
	I0603 14:51:32.660536    9752 round_trippers.go:580]     Audit-Id: 24ebbae7-d7dd-447a-9f61-66e8fc940940
	I0603 14:51:32.660536    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:32.660536    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:33.146474    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:33.146665    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:33.146665    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:33.146732    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:33.151962    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:33.151962    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:33.152048    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:33.152048    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:33.152048    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:33 GMT
	I0603 14:51:33.152048    9752 round_trippers.go:580]     Audit-Id: ef1e3b02-c660-4b2a-9f9b-06403132b44f
	I0603 14:51:33.152048    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:33.152048    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:33.152307    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:33.153135    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:33.153192    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:33.153192    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:33.153192    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:33.159425    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:33.159425    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:33.159830    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:33.159830    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:33.159830    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:33.159830    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:33.159830    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:33 GMT
	I0603 14:51:33.159830    9752 round_trippers.go:580]     Audit-Id: 2dfff49a-d1b9-4bc4-a8ae-8efd67816c3c
	I0603 14:51:33.160197    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:33.645010    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:33.645010    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:33.645010    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:33.645010    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:33.647814    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:33.648728    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:33.648793    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:33.648835    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:33.648835    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:33.648835    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:33.648835    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:33 GMT
	I0603 14:51:33.648866    9752 round_trippers.go:580]     Audit-Id: 888366a0-5d71-4b32-96b8-2791019c6de9
	I0603 14:51:33.648866    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:33.649536    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:33.649536    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:33.649536    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:33.649536    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:33.654224    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:33.654224    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:33.654224    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:33.654283    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:33.654283    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:33 GMT
	I0603 14:51:33.654305    9752 round_trippers.go:580]     Audit-Id: 543a9291-ece9-43cf-802c-b5b1b43b9bf7
	I0603 14:51:33.654305    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:33.654305    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:33.654573    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:33.654573    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:34.146636    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:34.146884    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:34.146884    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:34.146884    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:34.150944    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:34.151816    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:34.151816    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:34.151868    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:34 GMT
	I0603 14:51:34.151868    9752 round_trippers.go:580]     Audit-Id: e89babfd-d75a-4f66-8863-8196832f6316
	I0603 14:51:34.151868    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:34.151868    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:34.151901    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:34.151901    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:34.153150    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:34.153182    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:34.153182    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:34.153182    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:34.156115    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:34.156115    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:34.156115    9752 round_trippers.go:580]     Audit-Id: c63d2ebc-a699-42b7-ab14-e3d33a0f6131
	I0603 14:51:34.156115    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:34.156115    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:34.156115    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:34.156115    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:34.156115    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:34 GMT
	I0603 14:51:34.157418    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:34.646489    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:34.646680    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:34.646680    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:34.646680    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:34.651068    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:34.651206    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:34.651206    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:34.651206    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:34.651206    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:34 GMT
	I0603 14:51:34.651206    9752 round_trippers.go:580]     Audit-Id: 74365199-606c-47fa-b935-79592359b1df
	I0603 14:51:34.651206    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:34.651206    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:34.651437    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:34.652539    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:34.652610    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:34.652610    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:34.652610    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:34.655874    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:34.655874    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:34.655874    9752 round_trippers.go:580]     Audit-Id: 657dc101-8239-4709-ae75-76c4363e0595
	I0603 14:51:34.655874    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:34.655874    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:34.655874    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:34.655874    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:34.655874    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:34 GMT
	I0603 14:51:34.656313    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:35.147440    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:35.147440    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:35.147440    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:35.147440    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:35.152048    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:35.152226    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:35.152226    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:35.152226    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:35.152226    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:35.152226    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:35 GMT
	I0603 14:51:35.152226    9752 round_trippers.go:580]     Audit-Id: 83a6614c-2b20-4bb2-8f5b-0bf861852361
	I0603 14:51:35.152226    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:35.153034    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:35.153807    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:35.153866    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:35.153866    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:35.153866    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:35.156609    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:35.156609    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:35.157053    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:35.157053    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:35.157053    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:35.157053    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:35.157053    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:35 GMT
	I0603 14:51:35.157053    9752 round_trippers.go:580]     Audit-Id: 2129a726-1428-4d13-afa4-4183f98ee26d
	I0603 14:51:35.157053    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:35.645521    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:35.645521    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:35.645521    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:35.645521    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:35.649431    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:35.649495    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:35.649659    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:35.649722    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:35.649722    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:35.649722    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:35 GMT
	I0603 14:51:35.649722    9752 round_trippers.go:580]     Audit-Id: 4ba95eab-4df5-440b-a92a-684895aaf0cc
	I0603 14:51:35.649850    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:35.649918    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:35.650572    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:35.650572    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:35.650572    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:35.650572    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:35.657229    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:35.657229    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:35.657229    9752 round_trippers.go:580]     Audit-Id: 3829156e-8895-485e-9994-a26645615f68
	I0603 14:51:35.657229    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:35.657229    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:35.657229    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:35.657229    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:35.657229    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:35 GMT
	I0603 14:51:35.657229    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:35.657957    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
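	(Editorial note, not part of the captured log: the cycle recorded above GETs the coredns pod and then its node roughly every 500 ms until pod_ready sees the Ready condition turn True. As an illustrative sketch only — this is not minikube's actual implementation — a comparable readiness wait against the same API could be written with client-go as below; the kubeconfig path, 500 ms cadence, and the helper name podReady are assumptions, while the namespace and pod name are taken from the log.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	// (Helper name is an assumption for this sketch.)
	func podReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Load the default kubeconfig (~/.kube/config); path is an assumption.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll about every 500 ms, mirroring the cadence visible in the log.
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(
				context.TODO(), "coredns-7db6d8ff4d-c9wpc", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Printf("pod %q has status Ready=False, retrying\n", pod.Name)
			time.Sleep(500 * time.Millisecond)
		}
	}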
	I0603 14:51:36.144916    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:36.145142    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:36.145142    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:36.145142    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:36.148488    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:36.148970    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:36.148970    9752 round_trippers.go:580]     Audit-Id: 1432260b-62ac-45a7-8402-a63a48acdd20
	I0603 14:51:36.148970    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:36.148970    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:36.149214    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:36.149214    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:36.149214    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:36 GMT
	I0603 14:51:36.149470    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:36.150842    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:36.150943    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:36.150943    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:36.150943    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:36.153332    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:36.153332    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:36.154559    9752 round_trippers.go:580]     Audit-Id: e6a6add3-ab12-4ffc-a53c-abcc1c3b74f2
	I0603 14:51:36.154598    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:36.154598    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:36.154598    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:36.154598    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:36.154598    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:36 GMT
	I0603 14:51:36.154887    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:36.645997    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:36.646225    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:36.646225    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:36.646225    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:36.651600    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:36.651600    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:36.651600    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:36.651600    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:36 GMT
	I0603 14:51:36.651721    9752 round_trippers.go:580]     Audit-Id: 2813be50-e0ea-44f5-ad2b-c24caed18ecf
	I0603 14:51:36.651721    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:36.651721    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:36.651721    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:36.651872    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:36.652561    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:36.652722    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:36.652722    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:36.652722    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:36.654461    9752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:51:36.655383    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:36.655383    9752 round_trippers.go:580]     Audit-Id: 1cd78818-f09f-408b-b2af-c8baf3d12c9b
	I0603 14:51:36.655444    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:36.655444    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:36.655444    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:36.655444    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:36.655444    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:36 GMT
	I0603 14:51:36.655834    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:37.142181    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:37.142181    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:37.142181    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:37.142181    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:37.146768    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:37.146768    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:37.146768    9752 round_trippers.go:580]     Audit-Id: c27f7aaf-75a3-471f-9f63-4e3361ee29f4
	I0603 14:51:37.146768    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:37.146768    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:37.146768    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:37.146768    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:37.146768    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:37 GMT
	I0603 14:51:37.147772    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:37.148871    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:37.148871    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:37.148947    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:37.148947    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:37.151795    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:37.152264    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:37.152264    9752 round_trippers.go:580]     Audit-Id: e770e9e4-ecdf-4d56-b48e-22ad0963b5db
	I0603 14:51:37.152264    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:37.152264    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:37.152264    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:37.152264    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:37.152264    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:37 GMT
	I0603 14:51:37.152264    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:37.642441    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:37.642441    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:37.642575    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:37.642575    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:37.645935    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:37.646886    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:37.646886    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:37.646886    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:37 GMT
	I0603 14:51:37.646886    9752 round_trippers.go:580]     Audit-Id: 5da79e4c-8a98-4eec-8efa-fe1e1c93a034
	I0603 14:51:37.646886    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:37.646886    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:37.646886    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:37.647194    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:37.647998    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:37.648086    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:37.648086    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:37.648086    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:37.651295    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:37.651295    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:37.651295    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:37.651295    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:37 GMT
	I0603 14:51:37.651295    9752 round_trippers.go:580]     Audit-Id: f219361f-a093-4450-b077-ad6079309455
	I0603 14:51:37.651295    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:37.651295    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:37.651295    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:37.652265    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:38.140740    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:38.140740    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:38.140740    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:38.140740    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:38.144377    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:38.144377    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:38.144377    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:38 GMT
	I0603 14:51:38.144377    9752 round_trippers.go:580]     Audit-Id: fc5e0d3f-abae-49fd-a077-f03f17c5d595
	I0603 14:51:38.144377    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:38.144377    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:38.144890    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:38.144890    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:38.145029    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:38.145432    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:38.145432    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:38.145432    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:38.145432    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:38.150746    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:38.150746    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:38.150746    9752 round_trippers.go:580]     Audit-Id: cb06cedd-b185-4f75-9516-5245ce271c09
	I0603 14:51:38.150746    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:38.150746    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:38.150746    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:38.150746    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:38.150746    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:38 GMT
	I0603 14:51:38.151513    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:38.152079    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:38.640296    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:38.640296    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:38.640296    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:38.640296    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:38.645203    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:38.645203    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:38.645203    9752 round_trippers.go:580]     Audit-Id: cb6d63fc-9765-4bd4-88d5-111a297874fe
	I0603 14:51:38.645203    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:38.645203    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:38.645203    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:38.645203    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:38.645203    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:38 GMT
	I0603 14:51:38.645203    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:38.648274    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:38.648341    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:38.648341    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:38.648341    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:38.651612    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:38.651647    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:38.651647    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:38 GMT
	I0603 14:51:38.651685    9752 round_trippers.go:580]     Audit-Id: 78b86873-cd0e-4fc9-a129-94981a2e8fc3
	I0603 14:51:38.651685    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:38.651685    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:38.651685    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:38.651685    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:38.652358    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:39.146667    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:39.146913    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:39.146913    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:39.146913    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:39.151345    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:39.151424    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:39.151424    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:39 GMT
	I0603 14:51:39.151424    9752 round_trippers.go:580]     Audit-Id: e472a206-dd76-419d-984d-062d301fa34c
	I0603 14:51:39.151424    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:39.151424    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:39.151424    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:39.151424    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:39.152793    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:39.153562    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:39.153562    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:39.153562    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:39.153643    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:39.159010    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:39.159010    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:39.159010    9752 round_trippers.go:580]     Audit-Id: a8377e39-c1ca-4ed9-922a-3e05bc2048a4
	I0603 14:51:39.159010    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:39.159010    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:39.159010    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:39.159010    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:39.159010    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:39 GMT
	I0603 14:51:39.159631    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:39.649304    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:39.649304    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:39.649304    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:39.649304    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:39.653828    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:39.653890    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:39.653954    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:39.653954    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:39.653954    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:39 GMT
	I0603 14:51:39.653954    9752 round_trippers.go:580]     Audit-Id: ac2886e3-9c4e-4cc0-b4b6-9b790560edd1
	I0603 14:51:39.653954    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:39.653954    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:39.653954    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:39.654983    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:39.654983    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:39.654983    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:39.654983    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:39.658561    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:39.658753    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:39.658753    9752 round_trippers.go:580]     Audit-Id: 5da870b9-59d5-4c71-82cf-e889187d4ad5
	I0603 14:51:39.658753    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:39.658753    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:39.658753    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:39.658753    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:39.658753    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:39 GMT
	I0603 14:51:39.659320    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:40.147435    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:40.147435    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:40.147545    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:40.147545    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:40.150230    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:40.151354    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:40.151354    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:40.151354    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:40.151354    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:40 GMT
	I0603 14:51:40.151354    9752 round_trippers.go:580]     Audit-Id: 363afa80-b5a2-4062-ac01-b085038fb402
	I0603 14:51:40.151354    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:40.151354    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:40.151354    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:40.152093    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:40.152093    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:40.152093    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:40.152093    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:40.155143    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:40.155143    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:40.155143    9752 round_trippers.go:580]     Audit-Id: 3dea9a9b-c0ff-40af-87db-8cc4da665dd4
	I0603 14:51:40.155143    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:40.155678    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:40.155678    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:40.155678    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:40.155773    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:40 GMT
	I0603 14:51:40.155773    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:40.156650    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:40.645557    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:40.645557    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:40.645557    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:40.645557    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:40.649404    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:40.649404    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:40.649404    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:40.649404    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:40 GMT
	I0603 14:51:40.649404    9752 round_trippers.go:580]     Audit-Id: 547100ae-3316-4b6f-8108-fe657f2fe507
	I0603 14:51:40.649404    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:40.649404    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:40.649886    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:40.650753    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:40.651511    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:40.651511    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:40.651511    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:40.651511    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:40.654712    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:40.654712    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:40.654777    9752 round_trippers.go:580]     Audit-Id: c1efd975-cb42-4093-9c06-d489ccf04bbf
	I0603 14:51:40.654777    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:40.654777    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:40.654777    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:40.654777    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:40.654777    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:40 GMT
	I0603 14:51:40.655247    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:41.144992    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:41.144992    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:41.144992    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:41.144992    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:41.149643    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:41.149643    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:41.149778    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:41.149778    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:41.149778    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:41.149778    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:41 GMT
	I0603 14:51:41.149778    9752 round_trippers.go:580]     Audit-Id: 19d53dca-c102-4494-865f-05614e5d2c57
	I0603 14:51:41.149778    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:41.150016    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:41.150864    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:41.150864    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:41.150864    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:41.150864    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:41.158020    9752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 14:51:41.158174    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:41.158174    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:41.158174    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:41 GMT
	I0603 14:51:41.158174    9752 round_trippers.go:580]     Audit-Id: 52846a85-8338-4b55-8f09-f4d58933ff1f
	I0603 14:51:41.158174    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:41.158174    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:41.158244    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:41.158576    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:41.649534    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:41.649608    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:41.649608    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:41.649608    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:41.653119    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:41.653593    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:41.653593    9752 round_trippers.go:580]     Audit-Id: 0efaf407-2ccf-4da6-a5ae-f9ab2f785867
	I0603 14:51:41.653593    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:41.653593    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:41.653593    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:41.653593    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:41.653593    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:41 GMT
	I0603 14:51:41.653911    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:41.654564    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:41.654564    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:41.654794    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:41.654794    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:41.657924    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:41.657924    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:41.657924    9752 round_trippers.go:580]     Audit-Id: 7b8ce6ec-03e7-41e2-b909-7ca6b1113fd7
	I0603 14:51:41.657924    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:41.657924    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:41.657924    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:41.657924    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:41.658474    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:41 GMT
	I0603 14:51:41.658717    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:42.149340    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:42.149552    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:42.149552    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:42.149552    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:42.152790    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:42.153791    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:42.153841    9752 round_trippers.go:580]     Audit-Id: f384d4fd-a218-496a-a9f7-a68c1290ab6d
	I0603 14:51:42.153841    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:42.153841    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:42.153841    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:42.153841    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:42.153841    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:42 GMT
	I0603 14:51:42.154242    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:42.155166    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:42.155166    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:42.155239    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:42.155239    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:42.157805    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:42.158855    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:42.158855    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:42.158914    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:42.158914    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:42.158914    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:42.158914    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:42 GMT
	I0603 14:51:42.158914    9752 round_trippers.go:580]     Audit-Id: a08dc58a-15d6-4ada-9270-a4b9d6a0f773
	I0603 14:51:42.159227    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:42.160405    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:42.650022    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:42.650022    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:42.650144    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:42.650144    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:42.653439    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:42.653919    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:42.653919    9752 round_trippers.go:580]     Audit-Id: f52474b9-bea2-472c-a70e-1369afca95c2
	I0603 14:51:42.653919    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:42.653919    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:42.653919    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:42.653919    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:42.653919    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:42 GMT
	I0603 14:51:42.654435    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:42.655949    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:42.655949    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:42.655949    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:42.655949    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:42.658545    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:42.659101    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:42.659101    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:42.659101    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:42.659101    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:42.659101    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:42 GMT
	I0603 14:51:42.659101    9752 round_trippers.go:580]     Audit-Id: 4737e533-bb03-4ba1-9e49-6fe2edacd8b9
	I0603 14:51:42.659101    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:42.659605    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:43.148213    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:43.148290    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:43.148290    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:43.148290    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:43.152886    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:43.152975    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:43.152975    9752 round_trippers.go:580]     Audit-Id: 92ddfaf5-575f-45e1-ae35-157abe919f3c
	I0603 14:51:43.152975    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:43.152975    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:43.152975    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:43.152975    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:43.152975    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:43 GMT
	I0603 14:51:43.153560    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:43.154404    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:43.154516    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:43.154516    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:43.154516    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:43.160962    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:43.160962    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:43.160962    9752 round_trippers.go:580]     Audit-Id: 79a57e35-ac7e-4f5f-b65b-60d3913b5cc8
	I0603 14:51:43.160962    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:43.160962    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:43.160962    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:43.160962    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:43.160962    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:43 GMT
	I0603 14:51:43.160962    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:43.648921    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:43.648921    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:43.648921    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:43.648921    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:43.652517    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:43.652517    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:43.652517    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:43.652517    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:43.653538    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:43.653538    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:43.653538    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:43 GMT
	I0603 14:51:43.653538    9752 round_trippers.go:580]     Audit-Id: 3f66c874-2fa3-43e7-a773-d4fc95779033
	I0603 14:51:43.653778    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:43.654648    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:43.654720    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:43.654720    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:43.654720    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:43.656881    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:43.656881    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:43.656881    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:43.656881    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:43.657348    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:43.657348    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:43.657348    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:43 GMT
	I0603 14:51:43.657348    9752 round_trippers.go:580]     Audit-Id: 9ca51f00-d57e-47ad-a79c-aff8c9fed510
	I0603 14:51:43.657348    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:44.148819    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:44.148819    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:44.148819    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:44.148819    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:44.152529    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:44.152944    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:44.152944    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:44.152944    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:44.152944    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:44 GMT
	I0603 14:51:44.152944    9752 round_trippers.go:580]     Audit-Id: 622a3778-a063-4eb1-944f-fda8b28c0893
	I0603 14:51:44.152944    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:44.152944    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:44.152944    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:44.153932    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:44.153932    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:44.153932    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:44.153932    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:44.156525    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:44.156898    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:44.156898    9752 round_trippers.go:580]     Audit-Id: 328006eb-b542-42af-96e2-c20e0fbd062d
	I0603 14:51:44.156898    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:44.156898    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:44.156898    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:44.156898    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:44.156898    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:44 GMT
	I0603 14:51:44.157001    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:44.653062    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:44.653062    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:44.653062    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:44.653062    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:44.655872    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:44.655872    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:44.655872    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:44 GMT
	I0603 14:51:44.655872    9752 round_trippers.go:580]     Audit-Id: f0dc7fd0-5844-4ae2-bd7f-2be61b390952
	I0603 14:51:44.655872    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:44.655872    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:44.655872    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:44.655872    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:44.656865    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:44.657871    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:44.658876    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:44.658876    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:44.658876    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:44.660868    9752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:51:44.661876    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:44.661947    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:44.661947    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:44 GMT
	I0603 14:51:44.661947    9752 round_trippers.go:580]     Audit-Id: c2925120-9bce-4829-940a-51adb032f50d
	I0603 14:51:44.661947    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:44.661947    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:44.661947    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:44.662387    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:44.662918    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:45.145061    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:45.145284    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:45.145329    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:45.145329    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:45.149788    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:45.149788    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:45.149788    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:45 GMT
	I0603 14:51:45.149788    9752 round_trippers.go:580]     Audit-Id: 09a22687-e4bd-4626-94bd-74eb48db54fe
	I0603 14:51:45.149788    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:45.149788    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:45.149788    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:45.149788    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:45.149788    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:45.150769    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:45.150769    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:45.150769    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:45.150769    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:45.155522    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:45.155522    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:45.155522    9752 round_trippers.go:580]     Audit-Id: 42d83b26-67e8-4e04-86af-dc159d2a6a7c
	I0603 14:51:45.155522    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:45.155635    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:45.155635    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:45.155635    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:45.155635    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:45 GMT
	I0603 14:51:45.155962    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:45.650063    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:45.650255    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:45.650255    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:45.650255    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:45.654990    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:45.655277    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:45.655277    9752 round_trippers.go:580]     Audit-Id: fd5a6325-fa62-4ea2-990a-78573cffa89f
	I0603 14:51:45.655277    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:45.655277    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:45.655277    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:45.655277    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:45.655277    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:45 GMT
	I0603 14:51:45.655727    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:45.656053    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:45.656053    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:45.656053    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:45.656053    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:45.661722    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:45.661722    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:45.661722    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:45.661722    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:45 GMT
	I0603 14:51:45.661722    9752 round_trippers.go:580]     Audit-Id: ea8fd5f0-d0ad-457d-ae04-ea13e401b8b6
	I0603 14:51:45.661722    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:45.661722    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:45.661722    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:45.662386    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:46.143779    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:46.143779    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.143955    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.143955    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.148213    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:46.148213    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.148213    9752 round_trippers.go:580]     Audit-Id: 40fec9f6-64f8-49df-b38f-8e1048f437c6
	I0603 14:51:46.148213    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.148213    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.148213    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.148213    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.148213    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.148213    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1984","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0603 14:51:46.150494    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:46.150494    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.151605    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.151731    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.156353    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:46.156353    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.156353    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.156353    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.156353    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.156353    9752 round_trippers.go:580]     Audit-Id: 3292039c-fb70-4251-9078-30ff9b5804c5
	I0603 14:51:46.156353    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.156353    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.157173    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:46.157957    9752 pod_ready.go:92] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"True"
	I0603 14:51:46.158001    9752 pod_ready.go:81] duration metric: took 25.5199418s for pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.158100    9752 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.158227    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-720500
	I0603 14:51:46.158306    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.158306    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.158306    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.163555    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:46.163555    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.163555    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.163555    9752 round_trippers.go:580]     Audit-Id: b0d3c4dc-1d33-4c77-9d43-e4f8aa732a7a
	I0603 14:51:46.163555    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.163555    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.163555    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.163555    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.164117    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-720500","namespace":"kube-system","uid":"1a2533a2-16e9-4696-9694-186579c52b55","resourceVersion":"1922","creationTimestamp":"2024-06-03T14:50:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.154.20:2379","kubernetes.io/config.hash":"7a9c45e53018cd74c5a13ccfd96f1479","kubernetes.io/config.mirror":"7a9c45e53018cd74c5a13ccfd96f1479","kubernetes.io/config.seen":"2024-06-03T14:50:33.894763922Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:50:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0603 14:51:46.164319    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:46.164319    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.164319    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.164319    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.167779    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:46.167779    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.167779    9752 round_trippers.go:580]     Audit-Id: 85afce63-3f0b-48c9-b565-c3e87f6b41a5
	I0603 14:51:46.167779    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.168754    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.168754    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.168754    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.168754    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.169125    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:46.169559    9752 pod_ready.go:92] pod "etcd-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:51:46.169593    9752 pod_ready.go:81] duration metric: took 11.4921ms for pod "etcd-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.169593    9752 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.169731    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-720500
	I0603 14:51:46.169731    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.169731    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.169731    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.173842    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:46.173842    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.174035    9752 round_trippers.go:580]     Audit-Id: ba0a55f5-adda-4d8a-8a83-78e87e186a38
	I0603 14:51:46.174035    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.174096    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.174096    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.174096    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.174096    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.174482    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-720500","namespace":"kube-system","uid":"b27b9256-3c5b-4432-8a9e-ebe5303b88f0","resourceVersion":"1921","creationTimestamp":"2024-06-03T14:50:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.22.154.20:8443","kubernetes.io/config.hash":"a9aa17bec6c8b90196f8771e2e5c6391","kubernetes.io/config.mirror":"a9aa17bec6c8b90196f8771e2e5c6391","kubernetes.io/config.seen":"2024-06-03T14:50:33.891701119Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:50:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0603 14:51:46.174970    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:46.174970    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.174970    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.174970    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.178842    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:46.178842    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.178842    9752 round_trippers.go:580]     Audit-Id: e9e1c594-e8d8-40ca-a592-38a75e8f6844
	I0603 14:51:46.178842    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.178842    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.179103    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.179103    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.179142    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.179562    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:46.180004    9752 pod_ready.go:92] pod "kube-apiserver-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:51:46.180036    9752 pod_ready.go:81] duration metric: took 10.4432ms for pod "kube-apiserver-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.180036    9752 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.180148    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-720500
	I0603 14:51:46.180148    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.180148    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.180214    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.184993    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:46.184993    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.185112    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.185112    9752 round_trippers.go:580]     Audit-Id: e5b5f9b2-83e2-4ecd-8d32-16a9687f41ed
	I0603 14:51:46.185112    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.185112    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.185112    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.185112    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.185683    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-720500","namespace":"kube-system","uid":"6ba9c1e5-75bb-4731-9105-49acbbf3f237","resourceVersion":"1895","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"78d1bd07ad8cdd8611c0b5d7e797ef30","kubernetes.io/config.mirror":"78d1bd07ad8cdd8611c0b5d7e797ef30","kubernetes.io/config.seen":"2024-06-03T14:27:18.382156638Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0603 14:51:46.186449    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:46.186449    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.186561    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.186561    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.189195    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:46.189195    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.189195    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.189195    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.189195    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.189195    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.189195    9752 round_trippers.go:580]     Audit-Id: c71eb95a-f485-4124-93a1-1a8d60332f39
	I0603 14:51:46.189195    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.189195    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:46.190315    9752 pod_ready.go:92] pod "kube-controller-manager-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:51:46.190418    9752 pod_ready.go:81] duration metric: took 10.3825ms for pod "kube-controller-manager-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.190418    9752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-64l9x" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.190576    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-64l9x
	I0603 14:51:46.190604    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.190604    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.190651    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.192948    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:46.193784    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.193784    9752 round_trippers.go:580]     Audit-Id: a9646396-f5ed-4dd3-b273-572387c0ca82
	I0603 14:51:46.193820    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.193820    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.193820    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.193820    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.193820    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.194130    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-64l9x","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a","resourceVersion":"1822","creationTimestamp":"2024-06-03T14:27:32Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0603 14:51:46.194863    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:46.194913    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.194913    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.194942    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.197711    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:46.198044    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.198102    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.198102    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.198102    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.198102    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.198102    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.198179    9752 round_trippers.go:580]     Audit-Id: 9e308880-46f9-4d35-9c6b-5bc2a1e05f62
	I0603 14:51:46.198420    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:46.198906    9752 pod_ready.go:92] pod "kube-proxy-64l9x" in "kube-system" namespace has status "Ready":"True"
	I0603 14:51:46.198906    9752 pod_ready.go:81] duration metric: took 8.4376ms for pod "kube-proxy-64l9x" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.198952    9752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ctm5l" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.355306    9752 request.go:629] Waited for 156.1037ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctm5l
	I0603 14:51:46.355497    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctm5l
	I0603 14:51:46.355497    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.355497    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.355497    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.360270    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:46.360395    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.360395    9752 round_trippers.go:580]     Audit-Id: 29fca7d7-17f8-4079-ab18-828e0b70fc18
	I0603 14:51:46.360458    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.360458    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.360458    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.360458    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.360458    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.360794    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ctm5l","generateName":"kube-proxy-","namespace":"kube-system","uid":"38069b1b-8ba9-46af-b4e7-7add5d9c67fc","resourceVersion":"1761","creationTimestamp":"2024-06-03T14:35:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0603 14:51:46.556036    9752 request.go:629] Waited for 194.6279ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m03
	I0603 14:51:46.556198    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m03
	I0603 14:51:46.556198    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.556198    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.556198    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.560242    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:46.560340    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.560340    9752 round_trippers.go:580]     Audit-Id: 4ca8984b-d722-40a4-9174-94b0ce70bc9b
	I0603 14:51:46.560340    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.560340    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.560340    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.560340    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.560340    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.560632    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m03","uid":"daf03ea9-c0d0-4565-9ad8-44cd4fce8e19","resourceVersion":"1970","creationTimestamp":"2024-06-03T14:46:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_46_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:46:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0603 14:51:46.560789    9752 pod_ready.go:97] node "multinode-720500-m03" hosting pod "kube-proxy-ctm5l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m03" has status "Ready":"Unknown"
	I0603 14:51:46.560789    9752 pod_ready.go:81] duration metric: took 361.8334ms for pod "kube-proxy-ctm5l" in "kube-system" namespace to be "Ready" ...
	E0603 14:51:46.560789    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500-m03" hosting pod "kube-proxy-ctm5l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m03" has status "Ready":"Unknown"
	I0603 14:51:46.560789    9752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sm9rr" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.757822    9752 request.go:629] Waited for 196.3304ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sm9rr
	I0603 14:51:46.758118    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sm9rr
	I0603 14:51:46.758240    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.758240    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.758240    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.762063    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:46.762063    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.762063    9752 round_trippers.go:580]     Audit-Id: bb4d5d35-12de-46d3-8273-2f23908ac552
	I0603 14:51:46.762147    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.762147    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.762147    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.762147    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.762147    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.762203    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sm9rr","generateName":"kube-proxy-","namespace":"kube-system","uid":"4f0321c0-f47d-463e-bda2-919f37735748","resourceVersion":"1786","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
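The recurring "Waited for ... due to client-side throttling, not priority and fairness" entries above and below come from client-go's default client-side rate limiter (QPS 5, burst 10), which delays requests locally; the API server is not rejecting them. A minimal sketch of relaxing that limiter on a rest.Config before building a clientset follows — the values are illustrative only, not what minikube actually configures:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// client-go defaults to QPS=5 and Burst=10; bursts of GETs beyond that are
	// delayed locally, which is what the "client-side throttling" lines report.
	config.QPS = 50    // illustrative value
	config.Burst = 100 // illustrative value

	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println("clientset ready:", client != nil)
}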
	I0603 14:51:46.945454    9752 request.go:629] Waited for 182.0316ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:51:46.945555    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:51:46.945748    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.945748    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.945748    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.949377    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:46.950155    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.950155    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.950155    9752 round_trippers.go:580]     Audit-Id: 3a16cf6b-1037-4663-873c-2ae7d060f122
	I0603 14:51:46.950155    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.950155    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.950155    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.950155    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.950535    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"1974","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4583 chars]
	I0603 14:51:46.950662    9752 pod_ready.go:97] node "multinode-720500-m02" hosting pod "kube-proxy-sm9rr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m02" has status "Ready":"Unknown"
	I0603 14:51:46.950662    9752 pod_ready.go:81] duration metric: took 389.8701ms for pod "kube-proxy-sm9rr" in "kube-system" namespace to be "Ready" ...
	E0603 14:51:46.950662    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500-m02" hosting pod "kube-proxy-sm9rr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m02" has status "Ready":"Unknown"
	I0603 14:51:46.950662    9752 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:47.147455    9752 request.go:629] Waited for 195.9967ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-720500
	I0603 14:51:47.147455    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-720500
	I0603 14:51:47.147455    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:47.147455    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:47.147699    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:47.151484    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:47.151836    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:47.151836    9752 round_trippers.go:580]     Audit-Id: a2cd29d3-5dc1-4a57-bc2c-88c9819db781
	I0603 14:51:47.151836    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:47.151836    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:47.151836    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:47.151836    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:47.151836    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:47 GMT
	I0603 14:51:47.151997    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-720500","namespace":"kube-system","uid":"9d420d28-dde0-4504-a4d4-f840cab56ebe","resourceVersion":"1826","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f58e384885de6f2352fb028e836ba47f","kubernetes.io/config.mirror":"f58e384885de6f2352fb028e836ba47f","kubernetes.io/config.seen":"2024-06-03T14:27:18.382157538Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0603 14:51:47.350883    9752 request.go:629] Waited for 198.4263ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:47.350950    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:47.351036    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:47.351036    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:47.351036    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:47.353781    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:47.354681    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:47.354681    9752 round_trippers.go:580]     Audit-Id: 37c5c085-959b-46dc-8592-739e003d4822
	I0603 14:51:47.354681    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:47.354681    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:47.354681    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:47.354681    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:47.354770    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:47 GMT
	I0603 14:51:47.354897    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:47.355424    9752 pod_ready.go:92] pod "kube-scheduler-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:51:47.355536    9752 pod_ready.go:81] duration metric: took 404.8703ms for pod "kube-scheduler-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:47.355567    9752 pod_ready.go:38] duration metric: took 26.7310044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
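The readiness wait summarized above boils down to polling each system pod's Ready condition and skipping pods whose node is itself not Ready (the "skipping!" lines for multinode-720500-m02 and -m03). A minimal client-go sketch of the same polling idea — the pod and namespace names are taken from the log, everything else is illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll the pod's Ready condition for up to 6 minutes, as the test does.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-64l9x", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("kube-proxy-64l9x ready:", err == nil)
}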
	I0603 14:51:47.355567    9752 api_server.go:52] waiting for apiserver process to appear ...
	I0603 14:51:47.366477    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0603 14:51:47.390112    9752 command_runner.go:130] > 885576ffcadd
	I0603 14:51:47.390231    9752 logs.go:276] 1 containers: [885576ffcadd]
	I0603 14:51:47.402409    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0603 14:51:47.433941    9752 command_runner.go:130] > 480ef64cfa22
	I0603 14:51:47.433941    9752 logs.go:276] 1 containers: [480ef64cfa22]
	I0603 14:51:47.450044    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0603 14:51:47.477890    9752 command_runner.go:130] > f9b260d61dfb
	I0603 14:51:47.477890    9752 command_runner.go:130] > 68e49c3e6dda
	I0603 14:51:47.478605    9752 logs.go:276] 2 containers: [f9b260d61dfb 68e49c3e6dda]
	I0603 14:51:47.486174    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0603 14:51:47.513873    9752 command_runner.go:130] > e2d000674d52
	I0603 14:51:47.513873    9752 command_runner.go:130] > ec3860b2bb3e
	I0603 14:51:47.513873    9752 logs.go:276] 2 containers: [e2d000674d52 ec3860b2bb3e]
	I0603 14:51:47.523710    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0603 14:51:47.543960    9752 command_runner.go:130] > 42926c33070c
	I0603 14:51:47.543960    9752 command_runner.go:130] > 3823f2e2bdb2
	I0603 14:51:47.545185    9752 logs.go:276] 2 containers: [42926c33070c 3823f2e2bdb2]
	I0603 14:51:47.554161    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0603 14:51:47.578871    9752 command_runner.go:130] > f14b3b67d8f2
	I0603 14:51:47.578871    9752 command_runner.go:130] > 63a6ebee2e83
	I0603 14:51:47.578871    9752 logs.go:276] 2 containers: [f14b3b67d8f2 63a6ebee2e83]
	I0603 14:51:47.588695    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0603 14:51:47.611240    9752 command_runner.go:130] > 008dec75d90c
	I0603 14:51:47.611240    9752 command_runner.go:130] > ab840a6a9856
	I0603 14:51:47.611817    9752 logs.go:276] 2 containers: [008dec75d90c ab840a6a9856]
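The container discovery above is a series of "docker ps -a --filter=name=<pattern> --format={{.ID}}" calls, one per control-plane component, executed on the node. A rough local sketch of the same filter logic — it assumes a reachable Docker daemon and reuses the name patterns from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers, running or exited,
// whose name matches the given pattern, mirroring the filter used above.
func listContainerIDs(pattern string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+pattern, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns", "k8s_kube-scheduler"} {
		ids, err := listContainerIDs(name)
		fmt.Println(name, ids, err)
	}
}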
	I0603 14:51:47.611874    9752 logs.go:123] Gathering logs for dmesg ...
	I0603 14:51:47.611874    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 14:51:47.633367    9752 command_runner.go:130] > [Jun 3 14:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0603 14:51:47.633466    9752 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0603 14:51:47.633466    9752 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0603 14:51:47.633466    9752 command_runner.go:130] > [  +0.128622] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0603 14:51:47.633540    9752 command_runner.go:130] > [  +0.023991] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0603 14:51:47.633540    9752 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0603 14:51:47.633540    9752 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0603 14:51:47.633606    9752 command_runner.go:130] > [  +0.059620] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0603 14:51:47.633606    9752 command_runner.go:130] > [  +0.020549] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0603 14:51:47.633606    9752 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0603 14:51:47.633606    9752 command_runner.go:130] > [  +5.342920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0603 14:51:47.633681    9752 command_runner.go:130] > [  +0.685939] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0603 14:51:47.633681    9752 command_runner.go:130] > [  +1.735023] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0603 14:51:47.633681    9752 command_runner.go:130] > [Jun 3 14:49] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0603 14:51:47.633681    9752 command_runner.go:130] > [  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0603 14:51:47.633815    9752 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0603 14:51:47.633841    9752 command_runner.go:130] > [ +50.878858] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.173829] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [Jun 3 14:50] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.115993] kauditd_printk_skb: 73 callbacks suppressed
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.526092] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.219569] systemd-fstab-generator[1032]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.239915] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +2.915659] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.214861] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.207351] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.266530] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.876661] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.110633] kauditd_printk_skb: 205 callbacks suppressed
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +3.640158] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +1.365325] kauditd_printk_skb: 49 callbacks suppressed
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +5.844179] kauditd_printk_skb: 25 callbacks suppressed
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +3.106296] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +8.568344] kauditd_printk_skb: 70 callbacks suppressed
	I0603 14:51:47.635804    9752 logs.go:123] Gathering logs for kube-apiserver [885576ffcadd] ...
	I0603 14:51:47.635804    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 885576ffcadd"
	I0603 14:51:47.664976    9752 command_runner.go:130] ! I0603 14:50:36.316662       1 options.go:221] external host was not specified, using 172.22.154.20
	I0603 14:51:47.665203    9752 command_runner.go:130] ! I0603 14:50:36.322174       1 server.go:148] Version: v1.30.1
	I0603 14:51:47.665324    9752 command_runner.go:130] ! I0603 14:50:36.322276       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:47.665324    9752 command_runner.go:130] ! I0603 14:50:37.048360       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 14:51:47.665449    9752 command_runner.go:130] ! I0603 14:50:37.061107       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:51:47.665449    9752 command_runner.go:130] ! I0603 14:50:37.064640       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 14:51:47.665525    9752 command_runner.go:130] ! I0603 14:50:37.064927       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 14:51:47.665593    9752 command_runner.go:130] ! I0603 14:50:37.065980       1 instance.go:299] Using reconciler: lease
	I0603 14:51:47.665593    9752 command_runner.go:130] ! I0603 14:50:37.835903       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0603 14:51:47.665655    9752 command_runner.go:130] ! W0603 14:50:37.835946       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.665700    9752 command_runner.go:130] ! I0603 14:50:38.131228       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0603 14:51:47.665700    9752 command_runner.go:130] ! I0603 14:50:38.131786       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0603 14:51:47.665767    9752 command_runner.go:130] ! I0603 14:50:38.389972       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0603 14:51:47.665809    9752 command_runner.go:130] ! I0603 14:50:38.554749       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0603 14:51:47.665858    9752 command_runner.go:130] ! I0603 14:50:38.569175       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0603 14:51:47.665875    9752 command_runner.go:130] ! W0603 14:50:38.569288       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.665875    9752 command_runner.go:130] ! W0603 14:50:38.569316       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.665875    9752 command_runner.go:130] ! I0603 14:50:38.570033       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0603 14:51:47.665965    9752 command_runner.go:130] ! W0603 14:50:38.570117       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.665992    9752 command_runner.go:130] ! I0603 14:50:38.571568       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0603 14:51:47.665992    9752 command_runner.go:130] ! I0603 14:50:38.572496       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0603 14:51:47.666028    9752 command_runner.go:130] ! W0603 14:50:38.572572       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0603 14:51:47.666028    9752 command_runner.go:130] ! W0603 14:50:38.572581       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0603 14:51:47.666028    9752 command_runner.go:130] ! I0603 14:50:38.574368       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0603 14:51:47.666085    9752 command_runner.go:130] ! W0603 14:50:38.574469       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0603 14:51:47.666085    9752 command_runner.go:130] ! I0603 14:50:38.575393       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0603 14:51:47.666107    9752 command_runner.go:130] ! W0603 14:50:38.575496       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.575505       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.576166       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.576256       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.576314       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.577021       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.579498       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.579572       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.579581       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.580213       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.580317       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.580354       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.581564       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.581613       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.584780       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.585003       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.585204       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.586651       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.586996       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.587142       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.595038       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.595233       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.595389       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.598793       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.602076       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.614489       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.614724       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.625009       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.625156       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.625167       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.628702       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0603 14:51:47.666683    9752 command_runner.go:130] ! W0603 14:50:38.628761       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666683    9752 command_runner.go:130] ! W0603 14:50:38.628770       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666683    9752 command_runner.go:130] ! I0603 14:50:38.629748       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0603 14:51:47.666683    9752 command_runner.go:130] ! W0603 14:50:38.629860       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666779    9752 command_runner.go:130] ! I0603 14:50:38.645169       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0603 14:51:47.666779    9752 command_runner.go:130] ! W0603 14:50:38.645265       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666779    9752 command_runner.go:130] ! I0603 14:50:39.261254       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:47.666779    9752 command_runner.go:130] ! I0603 14:50:39.261440       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:47.666888    9752 command_runner.go:130] ! I0603 14:50:39.261269       1 secure_serving.go:213] Serving securely on [::]:8443
	I0603 14:51:47.666888    9752 command_runner.go:130] ! I0603 14:50:39.261878       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:47.666888    9752 command_runner.go:130] ! I0603 14:50:39.262067       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0603 14:51:47.666888    9752 command_runner.go:130] ! I0603 14:50:39.265023       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0603 14:51:47.666965    9752 command_runner.go:130] ! I0603 14:50:39.265458       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0603 14:51:47.666965    9752 command_runner.go:130] ! I0603 14:50:39.265691       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0603 14:51:47.666965    9752 command_runner.go:130] ! I0603 14:50:39.266224       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0603 14:51:47.666965    9752 command_runner.go:130] ! I0603 14:50:39.266475       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0603 14:51:47.667023    9752 command_runner.go:130] ! I0603 14:50:39.266740       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0603 14:51:47.667045    9752 command_runner.go:130] ! I0603 14:50:39.267054       1 aggregator.go:163] waiting for initial CRD sync...
	I0603 14:51:47.667045    9752 command_runner.go:130] ! I0603 14:50:39.267429       1 controller.go:116] Starting legacy_token_tracking_controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.267943       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.268211       1 controller.go:78] Starting OpenAPI AggregationController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.268471       1 available_controller.go:423] Starting AvailableConditionController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.268557       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.268599       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.269220       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.284296       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.284599       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.269381       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285184       1 controller.go:139] Starting OpenAPI controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285202       1 controller.go:87] Starting OpenAPI V3 controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285216       1 naming_controller.go:291] Starting NamingConditionController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285225       1 establishing_controller.go:76] Starting EstablishingController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285237       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285244       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285251       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285707       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.307386       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.313286       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.410099       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.413505       1 aggregator.go:165] initial CRD sync complete...
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.413538       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.413547       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.450903       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.462513       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.464182       1 policy_source.go:224] refreshing policies
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.465876       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.466992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.468755       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.469769       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.474781       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 14:51:47.667671    9752 command_runner.go:130] ! I0603 14:50:39.486280       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 14:51:47.667671    9752 command_runner.go:130] ! I0603 14:50:39.486306       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 14:51:47.667794    9752 command_runner.go:130] ! I0603 14:50:39.514217       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 14:51:47.667820    9752 command_runner.go:130] ! I0603 14:50:39.514539       1 cache.go:39] Caches are synced for autoregister controller
	I0603 14:51:47.667856    9752 command_runner.go:130] ! I0603 14:50:40.271657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 14:51:47.667856    9752 command_runner.go:130] ! W0603 14:50:40.806504       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.22.154.20]
	I0603 14:51:47.667918    9752 command_runner.go:130] ! I0603 14:50:40.811756       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 14:51:47.667918    9752 command_runner.go:130] ! I0603 14:50:40.836037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 14:51:47.667957    9752 command_runner.go:130] ! I0603 14:50:42.134633       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 14:51:47.667957    9752 command_runner.go:130] ! I0603 14:50:42.350516       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 14:51:47.667989    9752 command_runner.go:130] ! I0603 14:50:42.378696       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 14:51:47.667989    9752 command_runner.go:130] ! I0603 14:50:42.521546       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 14:51:47.667989    9752 command_runner.go:130] ! I0603 14:50:42.533218       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 14:51:47.674761    9752 logs.go:123] Gathering logs for etcd [480ef64cfa22] ...
	I0603 14:51:47.675398    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480ef64cfa22"
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:35.886507Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.887805Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.22.154.20:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.22.154.20:2380","--initial-cluster=multinode-720500=https://172.22.154.20:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.22.154.20:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.22.154.20:2380","--name=multinode-720500","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--prox
y-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888235Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:35.88843Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888669Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.22.154.20:2380"]}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888851Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.900566Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"]}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.902079Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-720500","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initia
l-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.951251Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"47.801744ms"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.980047Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.011946Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","commit-index":2070}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=()"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became follower at term 2"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a5b02d21ad5b31ff [peers: [], term: 2, commit: 2070, applied: 0, lastindex: 2070, lastterm: 2]"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:36.026369Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.034388Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1394}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.043305Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1796}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.052705Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.062682Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"a5b02d21ad5b31ff","timeout":"7s"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.063103Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"a5b02d21ad5b31ff"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.063165Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"a5b02d21ad5b31ff","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06697Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06815Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.068652Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.068733Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=(11939092234824790527)"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","added-peer-id":"a5b02d21ad5b31ff","added-peer-peer-urls":["https://172.22.150.195:2380"]}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","cluster-version":"3.5"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069633Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069793Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a5b02d21ad5b31ff","initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069837Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069995Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.22.154.20:2380"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.070008Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.22.154.20:2380"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.714622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff is starting a new election at term 2"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became pre-candidate at term 2"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.71538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgPreVoteResp from a5b02d21ad5b31ff at term 2"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became candidate at term 3"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgVoteResp from a5b02d21ad5b31ff at term 3"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.716205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became leader at term 3"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.716405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a5b02d21ad5b31ff elected leader a5b02d21ad5b31ff at term 3"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.724847Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.724791Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a5b02d21ad5b31ff","local-member-attributes":"{Name:multinode-720500 ClientURLs:[https://172.22.154.20:2379]}","request-path":"/0/members/a5b02d21ad5b31ff/attributes","cluster-id":"6a80a2fe8578e5e6","publish-timeout":"7s"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.725564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.726196Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.726364Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.729309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.730855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.22.154.20:2379"}
	I0603 14:51:47.707397    9752 logs.go:123] Gathering logs for kube-proxy [3823f2e2bdb2] ...
	I0603 14:51:47.707397    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3823f2e2bdb2"
	I0603 14:51:47.731404    9752 command_runner.go:130] ! I0603 14:27:34.209759       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.223354       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.150.195"]
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.293018       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.293146       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.293240       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.299545       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.300745       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.300860       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.304329       1 config.go:192] "Starting service config controller"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.304371       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.304437       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.304447       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.308322       1 config.go:319] "Starting node config controller"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.308362       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.409156       1 shared_informer.go:320] Caches are synced for node config
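	[editor's note] The kube-proxy container log (3823f2e2bdb2) predates the restart (14:27): it selected the iptables proxier in single-stack IPv4 mode and all three informer caches (service config, endpoint slice config, node config) synced, so kube-proxy started cleanly. Note it retrieved the pre-restart node IP 172.22.150.195, whereas etcd above now advertises 172.22.154.20, consistent with the node receiving a new address after the restart. A hedged sketch of how this block could be re-collected by hand, assuming the profile and container ID are unchanged:
	  minikube -p multinode-720500 ssh -- docker logs --tail 400 3823f2e2bdb2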
	I0603 14:51:47.734017    9752 logs.go:123] Gathering logs for kindnet [ab840a6a9856] ...
	I0603 14:51:47.735012    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab840a6a9856"
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:02.418496       1 main.go:227] handling current node
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:02.418509       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:02.418514       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:02.419057       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:02.419146       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:12.433874       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:12.433964       1 main.go:227] handling current node
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:12.433979       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:12.433987       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:12.434708       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:12.434812       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:22.441734       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:22.443317       1 main.go:227] handling current node
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:22.443366       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:22.443394       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:22.443536       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:22.443544       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:32.458669       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:32.458715       1 main.go:227] handling current node
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:32.458746       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:32.458759       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:32.459272       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:32.459313       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:42.465893       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:42.466039       1 main.go:227] handling current node
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:42.466054       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:42.466062       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:42.466530       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:42.466713       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:52.484160       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:52.484343       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:52.484358       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:52.484366       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:52.484918       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:52.485003       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:02.499379       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:02.500157       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:02.500459       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:02.500600       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:02.500943       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:02.501037       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:12.510568       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:12.510676       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:12.510691       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:12.510699       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:12.511065       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:12.511143       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:22.523564       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:22.523667       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:22.523681       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:22.523690       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:22.524005       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:22.524127       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:32.531830       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:32.532127       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:32.532312       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:32.532328       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:32.532640       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:32.532677       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:42.545963       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:42.546065       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:42.546080       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:42.546088       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:42.546348       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:42.546488       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:52.559438       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:52.559480       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:52.559491       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:52.559497       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:52.559891       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:52.560039       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:02.565901       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:02.566044       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:02.566059       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:02.566066       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:02.566452       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:02.566542       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:12.580562       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:12.580900       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:12.581000       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:12.581036       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:12.581299       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:12.581368       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:22.589560       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:22.589667       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:22.589684       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:22.589692       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:22.590588       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:22.590765       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:32.597414       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:32.597518       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:32.597534       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:32.597541       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:32.597952       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:32.598225       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:42.608987       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:42.609016       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:42.609075       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:42.609129       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:42.609601       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:42.609617       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:39:52.622153       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:39:52.622304       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:39:52.622322       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:39:52.622329       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:39:52.622994       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:39:52.623087       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:02.643681       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:02.643725       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:02.643738       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:02.643744       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:02.644288       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:02.644378       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:12.652030       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:12.652123       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:12.652138       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:12.652145       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:12.652402       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:12.652480       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:22.661893       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:22.661999       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:22.662015       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:22.662023       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:22.662623       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:22.662711       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:32.676552       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:32.676654       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:32.676669       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:32.676677       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:32.676798       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:32.676829       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:42.690358       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:42.690463       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:42.690478       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:42.690485       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:42.691131       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:42.691265       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:52.704086       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:52.704406       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:52.704615       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:52.704801       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:52.705555       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:52.705594       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:02.714922       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:02.715404       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:02.715629       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:02.715697       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:02.715836       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:02.717286       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:12.733829       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:12.733940       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:12.733954       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:12.733962       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:12.734767       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:12.734861       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:22.747461       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:22.747575       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:22.747589       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:22.747596       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:22.748388       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:22.748478       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:32.755048       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:32.755098       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:32.755111       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:32.755118       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:32.755281       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:32.755297       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:42.769640       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:42.769732       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:42.769748       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:42.769756       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:42.769900       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:42.769930       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:52.777787       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:52.777885       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:52.777901       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:41:52.777909       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:41:52.778034       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:41:52.778047       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:02.796158       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:02.796336       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:02.796352       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:02.796361       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:02.796675       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:02.796693       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:12.804901       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:12.805658       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:12.805981       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:12.806077       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:12.808338       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:12.808446       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:22.822735       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:22.822779       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:22.822792       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:22.822798       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:22.823041       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:22.823056       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:32.829730       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:32.829780       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:32.829793       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:32.829798       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:32.830081       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:32.830157       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:42.843959       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:42.844251       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:42.844269       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:42.844278       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:42.844481       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:42.844489       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:52.970825       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:52.970941       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:52.970957       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:52.970965       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:52.971359       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:52.971390       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:02.985233       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:02.985707       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:02.985801       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:02.985813       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:02.986087       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:02.986213       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:13.001792       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:13.001903       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:13.001919       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:13.001926       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:13.002409       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:13.002546       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:23.014350       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:23.014430       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:23.014443       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:23.014466       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:23.014973       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:23.015050       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:33.028486       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:33.028618       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:33.028632       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:33.028639       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:33.028797       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:33.029137       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:43.042807       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:43.042971       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:43.043055       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:43.043063       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:43.043998       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:43.044018       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:53.060985       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:53.061106       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:53.061142       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:53.061153       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:53.061441       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:53.061530       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:03.074882       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:03.075006       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:03.075023       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:03.075031       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:03.075251       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:03.075287       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:13.082515       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:13.082634       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:13.082649       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:13.082657       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:13.083854       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:13.084020       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:23.096516       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:23.096561       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:23.096574       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:23.096585       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:23.098310       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:23.098383       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:33.105034       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:33.105146       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:33.105199       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:33.105211       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:33.105354       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:33.105362       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:43.115437       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:43.115557       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:43.115572       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:43.115580       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:43.116248       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:43.116325       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:53.129841       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:53.129952       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:53.129967       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:53.129992       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:53.130474       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:53.130513       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:03.145387       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:03.145506       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:03.145522       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:03.145529       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:03.145991       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:03.146104       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:13.154208       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:13.154303       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:13.154318       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:13.154325       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:13.154444       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:13.154751       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:23.167023       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:23.167139       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:23.167156       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:23.167204       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:23.167490       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:23.167675       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:33.182518       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:33.182565       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:33.182579       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:33.182586       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:33.183095       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:33.183227       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:43.191204       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:43.191291       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:43.191307       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:43.191316       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:43.191713       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:43.191805       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:53.200715       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:53.200890       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:53.200927       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:53.200936       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:53.201688       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:53.201766       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:03.207719       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:03.207807       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:03.207821       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:03.207828       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.222386       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.222505       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.222522       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.222530       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.223020       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.223269       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.223648       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.22.151.134 Flags: [] Table: 0} 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:23.237715       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:23.237767       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:23.237797       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:23.237803       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:23.237989       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:23.238008       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:33.244795       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:33.244940       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:33.244960       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:33.244971       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:33.245647       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:33.245764       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:43.261658       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:43.262286       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:43.262368       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:43.262496       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:43.262847       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:43.262938       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:53.275414       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:53.275880       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:53.276199       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:53.276372       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:53.276690       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:53.276766       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:03.282970       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:03.283067       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:03.283157       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:03.283220       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:03.283747       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:03.283832       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:13.289208       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:13.289296       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:13.289311       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:13.289321       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:13.290501       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:13.290610       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:23.305390       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:23.305479       1 main.go:227] handling current node
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:23.305494       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:23.305501       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:23.306027       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:23.306196       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:33.320017       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:33.320267       1 main.go:227] handling current node
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:33.320364       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:33.320399       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:33.320800       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:33.320833       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:43.329989       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:43.330122       1 main.go:227] handling current node
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:43.330326       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:43.330486       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:43.331007       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:43.331092       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:53.346870       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:53.347021       1 main.go:227] handling current node
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:53.347035       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:53.347043       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:53.347400       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:53.347581       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:48:03.360705       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:48:03.360878       1 main.go:227] handling current node
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:48:03.360896       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:48:03.360904       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:48:03.361256       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:48:03.361334       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.801018    9752 logs.go:123] Gathering logs for container status ...
	I0603 14:51:47.801018    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 14:51:47.861010    9752 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0603 14:51:47.861010    9752 command_runner.go:130] > f9b260d61dfbd       cbb01a7bd410d                                                                                         3 seconds ago        Running             coredns                   1                   1bc1567075734       coredns-7db6d8ff4d-c9wpc
	I0603 14:51:47.861010    9752 command_runner.go:130] > 291b656660b4b       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   526c48b9021d6       busybox-fc5497c4f-n2t5d
	I0603 14:51:47.861010    9752 command_runner.go:130] > c81abdbb29c7c       6e38f40d628db                                                                                         22 seconds ago       Running             storage-provisioner       2                   b4a4ad712a66e       storage-provisioner
	I0603 14:51:47.861010    9752 command_runner.go:130] > 008dec75d90c7       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a3698c141b116       kindnet-26s27
	I0603 14:51:47.861010    9752 command_runner.go:130] > 2061be0913b2b       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b4a4ad712a66e       storage-provisioner
	I0603 14:51:47.861010    9752 command_runner.go:130] > 42926c33070ce       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   2ae2b089ecf3b       kube-proxy-64l9x
	I0603 14:51:47.861010    9752 command_runner.go:130] > 885576ffcadd7       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   192b150e443d2       kube-apiserver-multinode-720500
	I0603 14:51:47.861010    9752 command_runner.go:130] > 480ef64cfa226       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   3e60bc15f541e       etcd-multinode-720500
	I0603 14:51:47.862025    9752 command_runner.go:130] > f14b3b67d8f28       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   29feb700b8ebf       kube-controller-manager-multinode-720500
	I0603 14:51:47.862025    9752 command_runner.go:130] > e2d000674d525       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   0461b752e7281       kube-scheduler-multinode-720500
	I0603 14:51:47.862025    9752 command_runner.go:130] > a76f9e773a2f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   e2a9c5dc3b1b0       busybox-fc5497c4f-n2t5d
	I0603 14:51:47.862025    9752 command_runner.go:130] > 68e49c3e6ddaa       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   1ac710138e878       coredns-7db6d8ff4d-c9wpc
	I0603 14:51:47.862025    9752 command_runner.go:130] > ab840a6a9856d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   91df341636e89       kindnet-26s27
	I0603 14:51:47.862025    9752 command_runner.go:130] > 3823f2e2bdb28       747097150317f                                                                                         24 minutes ago       Exited              kube-proxy                0                   45c98b77811e1       kube-proxy-64l9x
	I0603 14:51:47.862025    9752 command_runner.go:130] > 63a6ebee2e836       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   19b3080db261a       kube-controller-manager-multinode-720500
	I0603 14:51:47.862025    9752 command_runner.go:130] > ec3860b2bb3ef       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   73f8312902b01       kube-scheduler-multinode-720500
	I0603 14:51:47.864009    9752 logs.go:123] Gathering logs for kubelet ...
	I0603 14:51:47.864009    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 14:51:47.892099    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.461169    1389 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.461675    1389 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.463263    1389 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: E0603 14:50:30.464581    1389 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.183733    1442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.183842    1442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.187119    1442 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: E0603 14:50:31.187481    1442 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.822960    1525 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.823030    1525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.823310    1525 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.825110    1525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.838917    1525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.864578    1525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.864681    1525 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.865871    1525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.865955    1525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-720500","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.867023    1525 topology_manager.go:138] "Creating topology manager with none policy"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.867065    1525 container_manager_linux.go:301] "Creating device plugin manager"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.868032    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872473    1525 kubelet.go:400] "Attempting to sync node with API server"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872570    1525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872603    1525 kubelet.go:312] "Adding apiserver pod source"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.874552    1525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.878535    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.878646    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.881181    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.881366    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.883254    1525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.884826    1525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.885850    1525 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.886975    1525 server.go:1264] "Started kubelet"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.895136    1525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.899089    1525 server.go:455] "Adding debug handlers to kubelet server"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.899110    1525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.901004    1525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.902811    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.22.154.20:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-720500.17d5860f76c4d283  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-720500,UID:multinode-720500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-720500,},FirstTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,LastTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-720500,}"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.905416    1525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.915751    1525 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.921759    1525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.948843    1525 reconciler.go:26] "Reconciler: start to sync state"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.955483    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="200ms"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.955934    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.956139    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956405    1525 factory.go:221] Registration of the systemd container factory successfully
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956512    1525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956608    1525 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956737    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.958873    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.958985    1525 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.959014    1525 kubelet.go:2337] "Starting kubelet main sync loop"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.959250    1525 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.983497    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.993696    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.993829    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023526    1525 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023565    1525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023586    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024426    1525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024488    1525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024529    1525 policy_none.go:49] "None policy: Start"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.028955    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.030495    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.035699    1525 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.035745    1525 state_mem.go:35] "Initializing new in-memory state store"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.036656    1525 state_mem.go:75] "Updated machine memory state"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.041946    1525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.042384    1525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.043501    1525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.049031    1525 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-720500\" not found"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.060498    1525 topology_manager.go:215] "Topology Admit Handler" podUID="f58e384885de6f2352fb028e836ba47f" podNamespace="kube-system" podName="kube-scheduler-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.061562    1525 topology_manager.go:215] "Topology Admit Handler" podUID="a9aa17bec6c8b90196f8771e2e5c6391" podNamespace="kube-system" podName="kube-apiserver-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.062289    1525 topology_manager.go:215] "Topology Admit Handler" podUID="78d1bd07ad8cdd8611c0b5d7e797ef30" podNamespace="kube-system" podName="kube-controller-manager-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.063858    1525 topology_manager.go:215] "Topology Admit Handler" podUID="7a9c45e53018cd74c5a13ccfd96f1479" podNamespace="kube-system" podName="etcd-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.065312    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b548c7f105007ea217eb3af0981a11ac9ecbfca503b21d85486e0b994bd5ea"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.075734    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.101720    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf3e16838818729d3b0679cd21964fdf47441ebf169a121ac598081429082e9d"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.120274    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91df341636e892cd93c25fa7ad7384bcf2bd819376c32058f4ee8317633ccdb9"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.136641    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73f8312902b01b75c8ea80234be416d3ffc9a1089252bd3c6d01a2cd098215be"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.156601    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.157623    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="400ms"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.173261    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b3080db261aed80f74241b549711c9e0e8bf8d76726121d9447965ca7e2087"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188271    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-kubeconfig\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188310    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-ca-certs\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188378    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-k8s-certs\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188400    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188427    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7a9c45e53018cd74c5a13ccfd96f1479-etcd-certs\") pod \"etcd-multinode-720500\" (UID: \"7a9c45e53018cd74c5a13ccfd96f1479\") " pod="kube-system/etcd-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188469    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7a9c45e53018cd74c5a13ccfd96f1479-etcd-data\") pod \"etcd-multinode-720500\" (UID: \"7a9c45e53018cd74c5a13ccfd96f1479\") " pod="kube-system/etcd-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188506    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f58e384885de6f2352fb028e836ba47f-kubeconfig\") pod \"kube-scheduler-multinode-720500\" (UID: \"f58e384885de6f2352fb028e836ba47f\") " pod="kube-system/kube-scheduler-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188525    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-ca-certs\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188569    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-k8s-certs\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188590    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-flexvolume-dir\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188614    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.189831    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45c98b77811e1a1610a97d2f641597b26b618ffe831fe5ad3ec241b34af76a6b"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.211600    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dbe33ccede837b8bf9917f1f085422d402ca29fcadcc3715a72edb8570a28f0"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.232599    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.233792    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.559275    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="800ms"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.635611    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.636574    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: W0603 14:50:34.930484    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.930562    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.013602    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.013737    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.058377    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.058502    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.276396    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.276674    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.361658    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="1.6s"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: I0603 14:50:35.437822    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.439455    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.759532    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.22.154.20:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-720500.17d5860f76c4d283  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-720500,UID:multinode-720500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-720500,},FirstTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,LastTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-720500,}"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:37 multinode-720500 kubelet[1525]: I0603 14:50:37.041688    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.524109    1525 kubelet_node_status.go:112] "Node was previously registered" node="multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.524300    1525 kubelet_node_status.go:76] "Successfully registered node" node="multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.525714    1525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.527071    1525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.528427    1525 setters.go:580] "Node became not ready" node="multinode-720500" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-03T14:50:39Z","lastTransitionTime":"2024-06-03T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.569920    1525 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-720500\" already exists" pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.884500    1525 apiserver.go:52] "Watching apiserver"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.889699    1525 topology_manager.go:215] "Topology Admit Handler" podUID="ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a" podNamespace="kube-system" podName="kube-proxy-64l9x"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.889893    1525 topology_manager.go:215] "Topology Admit Handler" podUID="08ea7c30-4962-4026-8eb0-6864835e97e6" podNamespace="kube-system" podName="kindnet-26s27"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890015    1525 topology_manager.go:215] "Topology Admit Handler" podUID="5d120704-a803-4278-aa7c-32304a6164a3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c9wpc"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890251    1525 topology_manager.go:215] "Topology Admit Handler" podUID="8380cfdf-9758-4fd8-a511-db50974806a2" podNamespace="kube-system" podName="storage-provisioner"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890408    1525 topology_manager.go:215] "Topology Admit Handler" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef" podNamespace="default" podName="busybox-fc5497c4f-n2t5d"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890532    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-720500" podUID="a99295b9-ba4f-4b3f-9bc7-3e6e09de9b09"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.890739    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.891991    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.919591    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-720500"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.922418    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947805    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a-lib-modules\") pod \"kube-proxy-64l9x\" (UID: \"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a\") " pod="kube-system/kube-proxy-64l9x"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947924    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-cni-cfg\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947970    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-xtables-lock\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947990    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8380cfdf-9758-4fd8-a511-db50974806a2-tmp\") pod \"storage-provisioner\" (UID: \"8380cfdf-9758-4fd8-a511-db50974806a2\") " pod="kube-system/storage-provisioner"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.948046    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a-xtables-lock\") pod \"kube-proxy-64l9x\" (UID: \"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a\") " pod="kube-system/kube-proxy-64l9x"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.948118    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-lib-modules\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.949354    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.949442    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:40.449414293 +0000 UTC m=+6.735278838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.967616    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc25f3659bb9b137f23bf9424dba20e" path="/var/lib/kubelet/pods/2dc25f3659bb9b137f23bf9424dba20e/volumes"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.969042    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36433239452f37b4b0410f69c12da408" path="/var/lib/kubelet/pods/36433239452f37b4b0410f69c12da408/volumes"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984720    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984802    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984886    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:40.484862826 +0000 UTC m=+6.770727471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: I0603 14:50:40.019663    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-720500" podStartSLOduration=1.019649758 podStartE2EDuration="1.019649758s" podCreationTimestamp="2024-06-03 14:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:50:40.018824057 +0000 UTC m=+6.304688702" watchObservedRunningTime="2024-06-03 14:50:40.019649758 +0000 UTC m=+6.305514303"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.455710    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.455796    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:41.455777259 +0000 UTC m=+7.741641804 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556713    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556760    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556889    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:41.556863952 +0000 UTC m=+7.842728597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: I0603 14:50:40.845891    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae2b089ecf3ba840b08192449967b2406f6c6d0d8a56a114ddaabc35e3c7ee5"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.271560    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3698c141b11639f71ba16cbcb832e7c02097b07aaf307ba72c7cf41a64d9dde"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.438384    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a4ad712a66e8ac5a3ba6d988006318e7c0932c2ad0e4ce9838e7a98695f555"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.438646    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-720500" podUID="aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.465430    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.465640    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:43.465616988 +0000 UTC m=+9.751481633 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.502271    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566766    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566801    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566917    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:43.566874981 +0000 UTC m=+9.852739626 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.961788    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.961975    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:42 multinode-720500 kubelet[1525]: I0603 14:50:42.520599    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-720500" podUID="aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.487623    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.487724    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:47.487705549 +0000 UTC m=+13.773570194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588583    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588739    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588832    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:47.588814442 +0000 UTC m=+13.874678987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.961044    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.961649    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:44 multinode-720500 kubelet[1525]: E0603 14:50:44.044586    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:45 multinode-720500 kubelet[1525]: E0603 14:50:45.961659    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:45 multinode-720500 kubelet[1525]: E0603 14:50:45.961954    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.521989    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.522196    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:55.522177172 +0000 UTC m=+21.808041717 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.622845    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.623053    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.623208    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:55.623162574 +0000 UTC m=+21.909027119 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.962070    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.962858    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.046385    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.959451    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.960279    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:51 multinode-720500 kubelet[1525]: E0603 14:50:51.960531    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:51 multinode-720500 kubelet[1525]: E0603 14:50:51.961799    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:52 multinode-720500 kubelet[1525]: I0603 14:50:52.534860    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-720500" podStartSLOduration=5.534842522 podStartE2EDuration="5.534842522s" podCreationTimestamp="2024-06-03 14:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:50:52.533300056 +0000 UTC m=+18.819164701" watchObservedRunningTime="2024-06-03 14:50:52.534842522 +0000 UTC m=+18.820707067"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:53 multinode-720500 kubelet[1525]: E0603 14:50:53.960555    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:53 multinode-720500 kubelet[1525]: E0603 14:50:53.961087    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:54 multinode-720500 kubelet[1525]: E0603 14:50:54.048175    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.600709    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.600890    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:51:11.600870216 +0000 UTC m=+37.886734761 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701124    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701172    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701306    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:51:11.701288915 +0000 UTC m=+37.987153560 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.959849    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.960175    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:57 multinode-720500 kubelet[1525]: E0603 14:50:57.960559    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:57 multinode-720500 kubelet[1525]: E0603 14:50:57.961245    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.050189    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.962718    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.963597    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:51:01 multinode-720500 kubelet[1525]: E0603 14:51:01.959962    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:51:01 multinode-720500 kubelet[1525]: E0603 14:51:01.961107    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:51:03 multinode-720500 kubelet[1525]: E0603 14:51:03.960485    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:51:03 multinode-720500 kubelet[1525]: E0603 14:51:03.961168    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:51:04 multinode-720500 kubelet[1525]: E0603 14:51:04.052718    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:51:05 multinode-720500 kubelet[1525]: E0603 14:51:05.960258    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:05 multinode-720500 kubelet[1525]: E0603 14:51:05.960918    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:07 multinode-720500 kubelet[1525]: E0603 14:51:07.960257    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:07 multinode-720500 kubelet[1525]: E0603 14:51:07.961704    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.054870    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.962422    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.963393    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.663780    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.664114    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:51:43.66409273 +0000 UTC m=+69.949957275 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.764900    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.764958    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.765022    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:51:43.765005046 +0000 UTC m=+70.050869691 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.962142    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.962815    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: I0603 14:51:12.896193    1525 scope.go:117] "RemoveContainer" containerID="097ab9a9a33bbee7997d827b04c2900ded8d532f232d924bb9d84ecc302ec8b8"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: I0603 14:51:12.896857    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: E0603 14:51:12.897037    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8380cfdf-9758-4fd8-a511-db50974806a2)\"" pod="kube-system/storage-provisioner" podUID="8380cfdf-9758-4fd8-a511-db50974806a2"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:13 multinode-720500 kubelet[1525]: E0603 14:51:13.960835    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:13 multinode-720500 kubelet[1525]: E0603 14:51:13.961713    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:14 multinode-720500 kubelet[1525]: E0603 14:51:14.056993    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:15 multinode-720500 kubelet[1525]: E0603 14:51:15.959976    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:15 multinode-720500 kubelet[1525]: E0603 14:51:15.961758    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:17 multinode-720500 kubelet[1525]: E0603 14:51:17.963254    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:17 multinode-720500 kubelet[1525]: E0603 14:51:17.963475    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:25 multinode-720500 kubelet[1525]: I0603 14:51:25.959992    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]: E0603 14:51:33.993879    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 14:51:47.901095    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 14:51:47.901095    9752 command_runner.go:130] > Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.037024    1525 scope.go:117] "RemoveContainer" containerID="dcd798ff8a4661302e83f6f11f14422de529b0502fcd6143a4a29a3f45757a8a"
	I0603 14:51:47.901095    9752 command_runner.go:130] > Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.091663    1525 scope.go:117] "RemoveContainer" containerID="5185046feae6a986658119ffc29d3a23423e83dba5ada983e73072c57ee6ad2d"
	I0603 14:51:47.901095    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.627773    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891"
	I0603 14:51:47.901095    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.667520    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7"
	I0603 14:51:47.943732    9752 logs.go:123] Gathering logs for coredns [f9b260d61dfb] ...
	I0603 14:51:47.943732    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b260d61dfb"
	I0603 14:51:47.980438    9752 command_runner.go:130] > .:53
	I0603 14:51:47.980438    9752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	I0603 14:51:47.980438    9752 command_runner.go:130] > CoreDNS-1.11.1
	I0603 14:51:47.980438    9752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 14:51:47.980438    9752 command_runner.go:130] > [INFO] 127.0.0.1:44244 - 27530 "HINFO IN 6157212600695805867.8146164028617998750. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029059168s
	I0603 14:51:47.981455    9752 logs.go:123] Gathering logs for kube-proxy [42926c33070c] ...
	I0603 14:51:47.981455    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42926c33070c"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.069219       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.114052       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.154.20"]
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.256500       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.256559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.256598       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.262735       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.263687       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.263771       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.271889       1 config.go:192] "Starting service config controller"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.273191       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.273658       1 config.go:319] "Starting node config controller"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.273675       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.275244       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.279063       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.373930       1 shared_informer.go:320] Caches are synced for node config
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.373994       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.379201       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:51:48.007829    9752 logs.go:123] Gathering logs for kube-controller-manager [63a6ebee2e83] ...
	I0603 14:51:48.007829    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a6ebee2e83"
	I0603 14:51:48.041752    9752 command_runner.go:130] ! I0603 14:27:13.353282       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:48.041752    9752 command_runner.go:130] ! I0603 14:27:13.803232       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 14:51:48.041865    9752 command_runner.go:130] ! I0603 14:27:13.803270       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:48.041865    9752 command_runner.go:130] ! I0603 14:27:13.805599       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 14:51:48.041865    9752 command_runner.go:130] ! I0603 14:27:13.806647       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:48.041865    9752 command_runner.go:130] ! I0603 14:27:13.806911       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:48.041943    9752 command_runner.go:130] ! I0603 14:27:13.807149       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:48.042041    9752 command_runner.go:130] ! I0603 14:27:18.070475       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 14:51:48.042071    9752 command_runner.go:130] ! I0603 14:27:18.071643       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 14:51:48.042071    9752 command_runner.go:130] ! I0603 14:27:18.088516       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 14:51:48.042071    9752 command_runner.go:130] ! I0603 14:27:18.089260       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 14:51:48.042605    9752 command_runner.go:130] ! I0603 14:27:18.091678       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 14:51:48.042605    9752 command_runner.go:130] ! I0603 14:27:18.106231       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 14:51:48.042605    9752 command_runner.go:130] ! I0603 14:27:18.107081       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 14:51:48.042747    9752 command_runner.go:130] ! I0603 14:27:18.108455       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:48.042774    9752 command_runner.go:130] ! I0603 14:27:18.109348       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 14:51:48.042774    9752 command_runner.go:130] ! I0603 14:27:18.151033       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 14:51:48.042774    9752 command_runner.go:130] ! I0603 14:27:18.151678       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 14:51:48.042835    9752 command_runner.go:130] ! I0603 14:27:18.154062       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 14:51:48.042857    9752 command_runner.go:130] ! I0603 14:27:18.171773       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 14:51:48.042857    9752 command_runner.go:130] ! I0603 14:27:18.172224       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 14:51:48.042902    9752 command_runner.go:130] ! I0603 14:27:18.174296       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 14:51:48.042902    9752 command_runner.go:130] ! I0603 14:27:18.174338       1 shared_informer.go:320] Caches are synced for tokens
	I0603 14:51:48.042924    9752 command_runner.go:130] ! I0603 14:27:18.177788       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 14:51:48.042924    9752 command_runner.go:130] ! I0603 14:27:18.178320       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 14:51:48.042990    9752 command_runner.go:130] ! I0603 14:27:28.218964       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 14:51:48.042990    9752 command_runner.go:130] ! I0603 14:27:28.219108       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 14:51:48.042990    9752 command_runner.go:130] ! I0603 14:27:28.219379       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 14:51:48.042990    9752 command_runner.go:130] ! I0603 14:27:28.219457       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 14:51:48.043074    9752 command_runner.go:130] ! I0603 14:27:28.240397       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 14:51:48.043074    9752 command_runner.go:130] ! I0603 14:27:28.240536       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 14:51:48.043074    9752 command_runner.go:130] ! I0603 14:27:28.241865       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 14:51:48.043127    9752 command_runner.go:130] ! I0603 14:27:28.252890       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 14:51:48.043159    9752 command_runner.go:130] ! I0603 14:27:28.252986       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 14:51:48.043159    9752 command_runner.go:130] ! I0603 14:27:28.253020       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 14:51:48.043159    9752 command_runner.go:130] ! I0603 14:27:28.253969       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.254003       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.267837       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.268144       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.268510       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.280487       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.280963       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.281100       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.330303       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.330841       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 14:51:48.043224    9752 command_runner.go:130] ! E0603 14:27:28.344040       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.344231       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.359644       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.360056       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.360090       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.377777       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.378044       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.378071       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.393317       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.393857       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.394059       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.410446       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.411081       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.412101       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.512629       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.513125       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.664349       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.664428       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.664441       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.664449       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.708054       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.708215       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.708231       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.708444       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:28.708473       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:28.708481       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:28.864634       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:28.864803       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:28.865680       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:29.059529       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:29.059649       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 14:51:48.043908    9752 command_runner.go:130] ! I0603 14:27:29.059722       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 14:51:48.043908    9752 command_runner.go:130] ! I0603 14:27:29.059857       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 14:51:48.043908    9752 command_runner.go:130] ! I0603 14:27:29.216054       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 14:51:48.043908    9752 command_runner.go:130] ! I0603 14:27:29.216706       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 14:51:48.043974    9752 command_runner.go:130] ! I0603 14:27:29.217129       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 14:51:48.043989    9752 command_runner.go:130] ! I0603 14:27:29.364837       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 14:51:48.043989    9752 command_runner.go:130] ! I0603 14:27:29.364997       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 14:51:48.043989    9752 command_runner.go:130] ! I0603 14:27:29.365010       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 14:51:48.043989    9752 command_runner.go:130] ! I0603 14:27:29.412763       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 14:51:48.044044    9752 command_runner.go:130] ! I0603 14:27:29.412820       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 14:51:48.044066    9752 command_runner.go:130] ! I0603 14:27:29.412852       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.412870       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.566965       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.567223       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.568152       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.820140       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.821302       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.821913       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.821950       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.821977       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! E0603 14:27:29.857788       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.858966       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.016833       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.016997       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.017402       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.171847       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.172459       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.171899       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.172588       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.313964       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.316900       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.318749       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.359770       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.359992       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.360405       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.361780       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.362782       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.362463       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.363332       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.362554       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.363636       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.362564       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.362302       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.362526       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.045755    9752 command_runner.go:130] ! I0603 14:27:30.362586       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.513474       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.513598       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.513645       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.663349       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.663937       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.664013       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.965387       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.965553       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 14:51:48.046079    9752 command_runner.go:130] ! I0603 14:27:30.965614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 14:51:48.046079    9752 command_runner.go:130] ! I0603 14:27:30.965669       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 14:51:48.046079    9752 command_runner.go:130] ! I0603 14:27:30.965730       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 14:51:48.046079    9752 command_runner.go:130] ! W0603 14:27:30.965760       1 shared_informer.go:597] resyncPeriod 16h47m43.189313611s is smaller than resyncCheckPeriod 20h18m50.945071724s and the informer has already started. Changing it to 20h18m50.945071724s
	I0603 14:51:48.046079    9752 command_runner.go:130] ! I0603 14:27:30.965868       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.966063       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.966153       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.966351       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! W0603 14:27:30.966376       1 shared_informer.go:597] resyncPeriod 20h4m14.719740563s is smaller than resyncCheckPeriod 20h18m50.945071724s and the informer has already started. Changing it to 20h18m50.945071724s
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.966444       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.966547       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.966953       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.967035       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.967206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.967556       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 14:51:48.046476    9752 command_runner.go:130] ! I0603 14:27:30.967765       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 14:51:48.046476    9752 command_runner.go:130] ! I0603 14:27:30.967951       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 14:51:48.046551    9752 command_runner.go:130] ! I0603 14:27:30.968043       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 14:51:48.046551    9752 command_runner.go:130] ! I0603 14:27:30.968127       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 14:51:48.046551    9752 command_runner.go:130] ! I0603 14:27:30.968266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 14:51:48.046627    9752 command_runner.go:130] ! I0603 14:27:30.968373       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 14:51:48.046627    9752 command_runner.go:130] ! I0603 14:27:30.969236       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 14:51:48.046627    9752 command_runner.go:130] ! I0603 14:27:30.969448       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:48.046627    9752 command_runner.go:130] ! I0603 14:27:30.969971       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 14:51:48.046627    9752 command_runner.go:130] ! I0603 14:27:31.113941       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.114128       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.114206       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.263385       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.263850       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.263883       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.412784       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.412929       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.412960       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.563645       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.563784       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.563863       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.716550       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.717040       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.717246       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.727461       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.754004       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500\" does not exist"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.754224       1 shared_informer.go:320] Caches are synced for GC
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.754460       1 shared_informer.go:320] Caches are synced for HPA
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.760470       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.761503       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.763249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.763617       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.764580       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.765622       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.765811       1 shared_informer.go:320] Caches are synced for TTL
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.765139       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.765067       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.768636       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.770136       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.772665       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 14:51:48.047440    9752 command_runner.go:130] ! I0603 14:27:31.775271       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 14:51:48.047440    9752 command_runner.go:130] ! I0603 14:27:31.782285       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 14:51:48.047440    9752 command_runner.go:130] ! I0603 14:27:31.792874       1 shared_informer.go:320] Caches are synced for service account
	I0603 14:51:48.047440    9752 command_runner.go:130] ! I0603 14:27:31.795205       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 14:51:48.047509    9752 command_runner.go:130] ! I0603 14:27:31.809247       1 shared_informer.go:320] Caches are synced for taint
	I0603 14:51:48.047509    9752 command_runner.go:130] ! I0603 14:27:31.809495       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 14:51:48.047611    9752 command_runner.go:130] ! I0603 14:27:31.810723       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500"
	I0603 14:51:48.047611    9752 command_runner.go:130] ! I0603 14:27:31.812015       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:48.047611    9752 command_runner.go:130] ! I0603 14:27:31.812917       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 14:51:48.047611    9752 command_runner.go:130] ! I0603 14:27:31.812992       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:51:48.047686    9752 command_runner.go:130] ! I0603 14:27:31.815953       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 14:51:48.047704    9752 command_runner.go:130] ! I0603 14:27:31.816065       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 14:51:48.047704    9752 command_runner.go:130] ! I0603 14:27:31.816884       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 14:51:48.047704    9752 command_runner.go:130] ! I0603 14:27:31.817703       1 shared_informer.go:320] Caches are synced for expand
	I0603 14:51:48.047771    9752 command_runner.go:130] ! I0603 14:27:31.817728       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:51:48.047771    9752 command_runner.go:130] ! I0603 14:27:31.819607       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 14:51:48.047771    9752 command_runner.go:130] ! I0603 14:27:31.820072       1 shared_informer.go:320] Caches are synced for node
	I0603 14:51:48.047771    9752 command_runner.go:130] ! I0603 14:27:31.820270       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 14:51:48.047771    9752 command_runner.go:130] ! I0603 14:27:31.820477       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 14:51:48.047850    9752 command_runner.go:130] ! I0603 14:27:31.820555       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 14:51:48.047850    9752 command_runner.go:130] ! I0603 14:27:31.820587       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 14:51:48.047850    9752 command_runner.go:130] ! I0603 14:27:31.820081       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 14:51:48.047850    9752 command_runner.go:130] ! I0603 14:27:31.825727       1 shared_informer.go:320] Caches are synced for namespace
	I0603 14:51:48.047910    9752 command_runner.go:130] ! I0603 14:27:31.832846       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 14:51:48.047910    9752 command_runner.go:130] ! I0603 14:27:31.842133       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:51:48.047910    9752 command_runner.go:130] ! I0603 14:27:31.855357       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500" podCIDRs=["10.244.0.0/24"]
	I0603 14:51:48.048016    9752 command_runner.go:130] ! I0603 14:27:31.878271       1 shared_informer.go:320] Caches are synced for job
	I0603 14:51:48.048040    9752 command_runner.go:130] ! I0603 14:27:31.913558       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:51:48.048040    9752 command_runner.go:130] ! I0603 14:27:31.965153       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.028352       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.061268       1 shared_informer.go:320] Caches are synced for disruption
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.065241       1 shared_informer.go:320] Caches are synced for deployment
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.069863       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.469591       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.510278       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.510533       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:33.110436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="199.281878ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:33.230475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="119.89616ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:33.230569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:34.176449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.004127ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:34.199426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.643683ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:34.201037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.6µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:43.109227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="168.101µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:43.154756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="203.6µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:44.622262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.3µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:45.655101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.946906ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:45.656447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.098µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:46.817078       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:30:30.530460       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:30:30.563054       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m02" podCIDRs=["10.244.1.0/24"]
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:30:31.846889       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:30:49.741096       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:31:16.611365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.145667ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:31:16.634251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.843998ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:31:16.634722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="196.103µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:31:16.635057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.4µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:31:16.670503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.001µs"
	I0603 14:51:48.048609    9752 command_runner.go:130] ! I0603 14:31:19.698737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.129108ms"
	I0603 14:51:48.048609    9752 command_runner.go:130] ! I0603 14:31:19.698833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.8µs"
	I0603 14:51:48.048609    9752 command_runner.go:130] ! I0603 14:31:20.055879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.87041ms"
	I0603 14:51:48.048609    9752 command_runner.go:130] ! I0603 14:31:20.057158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.2µs"
	I0603 14:51:48.048609    9752 command_runner.go:130] ! I0603 14:35:14.351135       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.048795    9752 command_runner.go:130] ! I0603 14:35:14.351827       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:48.048869    9752 command_runner.go:130] ! I0603 14:35:14.376803       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.2.0/24"]
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:35:16.927010       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:35:33.157459       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:43:17.065455       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:45:58.451014       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:46:04.988996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:46:04.989982       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:46:05.046032       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.3.0/24"]
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:46:11.957254       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:47:47.196592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.069847    9752 logs.go:123] Gathering logs for describe nodes ...
	I0603 14:51:48.069847    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 14:51:48.317783    9752 command_runner.go:130] > Name:               multinode-720500
	I0603 14:51:48.317849    9752 command_runner.go:130] > Roles:              control-plane
	I0603 14:51:48.317849    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:48.317912    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:48.317912    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:48.317912    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500
	I0603 14:51:48.317912    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:48.317974    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:48.317974    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:48.317974    9752 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0603 14:51:48.318084    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_27_19_0700
	I0603 14:51:48.318105    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:48.318105    9752 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0603 14:51:48.318127    9752 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0603 14:51:48.318158    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:48.318158    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:48.318158    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:48.318158    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:27:15 +0000
	I0603 14:51:48.318158    9752 command_runner.go:130] > Taints:             <none>
	I0603 14:51:48.318158    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:48.318158    9752 command_runner.go:130] > Lease:
	I0603 14:51:48.318158    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500
	I0603 14:51:48.318158    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:48.318158    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:51:40 +0000
	I0603 14:51:48.318158    9752 command_runner.go:130] > Conditions:
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0603 14:51:48.318158    9752 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0603 14:51:48.318158    9752 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0603 14:51:48.318158    9752 command_runner.go:130] >   DiskPressure     False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0603 14:51:48.318158    9752 command_runner.go:130] >   PIDPressure      False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Ready            True    Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:51:20 +0000   KubeletReady                 kubelet is posting ready status
	I0603 14:51:48.318158    9752 command_runner.go:130] > Addresses:
	I0603 14:51:48.318158    9752 command_runner.go:130] >   InternalIP:  172.22.154.20
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Hostname:    multinode-720500
	I0603 14:51:48.318158    9752 command_runner.go:130] > Capacity:
	I0603 14:51:48.318158    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:48.318158    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:48.318158    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:48.318158    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:48.318158    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:48.318158    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:48.318158    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:48.318158    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:48.318158    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:48.318158    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:48.318158    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:48.318158    9752 command_runner.go:130] > System Info:
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Machine ID:                 d1c31924319744c587cc3327e70686c4
	I0603 14:51:48.318158    9752 command_runner.go:130] >   System UUID:                ea941aa7-cd12-1640-be08-34f8de2baf60
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Boot ID:                    81a28d6f-5e2f-4dbf-9879-01594b427fd6
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:48.318158    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:48.318702    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:48.318702    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:48.318702    9752 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0603 14:51:48.318762    9752 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0603 14:51:48.318762    9752 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0603 14:51:48.318762    9752 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:48.318762    9752 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0603 14:51:48.318762    9752 command_runner.go:130] >   default                     busybox-fc5497c4f-n2t5d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 14:51:48.318857    9752 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-c9wpc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0603 14:51:48.318857    9752 command_runner.go:130] >   kube-system                 etcd-multinode-720500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         69s
	I0603 14:51:48.318857    9752 command_runner.go:130] >   kube-system                 kindnet-26s27                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0603 14:51:48.318922    9752 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-720500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	I0603 14:51:48.318945    9752 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-720500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:48.318974    9752 command_runner.go:130] >   kube-system                 kube-proxy-64l9x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:48.318974    9752 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-720500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:48.318974    9752 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:48.318974    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:48.318974    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Resource           Requests     Limits
	I0603 14:51:48.318974    9752 command_runner.go:130] >   --------           --------     ------
	I0603 14:51:48.318974    9752 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0603 14:51:48.318974    9752 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0603 14:51:48.318974    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0603 14:51:48.318974    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0603 14:51:48.318974    9752 command_runner.go:130] > Events:
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 14:51:48.318974    9752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-720500 status is now: NodeReady
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	I0603 14:51:48.319527    9752 command_runner.go:130] > Name:               multinode-720500-m02
	I0603 14:51:48.319527    9752 command_runner.go:130] > Roles:              <none>
	I0603 14:51:48.319527    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500-m02
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 14:51:48.319685    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_30_31_0700
	I0603 14:51:48.319685    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:48.319788    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:48.319811    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:48.319839    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:48.319839    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:30:30 +0000
	I0603 14:51:48.319839    9752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 14:51:48.319839    9752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 14:51:48.319839    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:48.319839    9752 command_runner.go:130] > Lease:
	I0603 14:51:48.319839    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500-m02
	I0603 14:51:48.319839    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:48.319839    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:47:23 +0000
	I0603 14:51:48.319839    9752 command_runner.go:130] > Conditions:
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 14:51:48.319839    9752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 14:51:48.319839    9752 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.319839    9752 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.319839    9752 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.319839    9752 command_runner.go:130] > Addresses:
	I0603 14:51:48.319839    9752 command_runner.go:130] >   InternalIP:  172.22.146.196
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Hostname:    multinode-720500-m02
	I0603 14:51:48.319839    9752 command_runner.go:130] > Capacity:
	I0603 14:51:48.319839    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:48.319839    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:48.319839    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:48.319839    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:48.319839    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:48.319839    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:48.319839    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:48.319839    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:48.319839    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:48.319839    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:48.319839    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:48.319839    9752 command_runner.go:130] > System Info:
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Machine ID:                 235e819893284fd6a235e0cb3c7475f0
	I0603 14:51:48.319839    9752 command_runner.go:130] >   System UUID:                e57aaa06-73e1-b24d-bfac-b1ae5e512ff1
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Boot ID:                    fe92bdd5-fbf4-4f1a-9684-a535d77de9c7
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:48.319839    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:48.319839    9752 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0603 14:51:48.319839    9752 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0603 14:51:48.319839    9752 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:48.319839    9752 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0603 14:51:48.319839    9752 command_runner.go:130] >   default                     busybox-fc5497c4f-mjhcf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 14:51:48.320371    9752 command_runner.go:130] >   kube-system                 kindnet-fmfz2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0603 14:51:48.320371    9752 command_runner.go:130] >   kube-system                 kube-proxy-sm9rr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0603 14:51:48.320429    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:48.320429    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:48.320429    9752 command_runner.go:130] >   Resource           Requests   Limits
	I0603 14:51:48.320429    9752 command_runner.go:130] >   --------           --------   ------
	I0603 14:51:48.320429    9752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 14:51:48.320429    9752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 14:51:48.320429    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 14:51:48.320429    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 14:51:48.320548    9752 command_runner.go:130] > Events:
	I0603 14:51:48.320548    9752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 14:51:48.320548    9752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 14:51:48.320548    9752 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0603 14:51:48.320616    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientMemory
	I0603 14:51:48.320641    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasNoDiskPressure
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientPID
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-720500-m02 status is now: NodeReady
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Normal  NodeNotReady             3m41s              node-controller  Node multinode-720500-m02 status is now: NodeNotReady
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	I0603 14:51:48.320671    9752 command_runner.go:130] > Name:               multinode-720500-m03
	I0603 14:51:48.320671    9752 command_runner.go:130] > Roles:              <none>
	I0603 14:51:48.320671    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500-m03
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_46_05_0700
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:48.320671    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:48.320671    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:46:04 +0000
	I0603 14:51:48.320671    9752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 14:51:48.320671    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:48.320671    9752 command_runner.go:130] > Lease:
	I0603 14:51:48.320671    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500-m03
	I0603 14:51:48.320671    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:48.320671    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:47:06 +0000
	I0603 14:51:48.320671    9752 command_runner.go:130] > Conditions:
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 14:51:48.320671    9752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 14:51:48.320671    9752 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.320671    9752 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.320671    9752 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.320671    9752 command_runner.go:130] > Addresses:
	I0603 14:51:48.320671    9752 command_runner.go:130] >   InternalIP:  172.22.151.134
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Hostname:    multinode-720500-m03
	I0603 14:51:48.320671    9752 command_runner.go:130] > Capacity:
	I0603 14:51:48.320671    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:48.321203    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:48.321203    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:48.321203    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:48.321260    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:48.321260    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:48.321260    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:48.321260    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:48.321260    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:48.321260    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:48.321260    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:48.321381    9752 command_runner.go:130] > System Info:
	I0603 14:51:48.321381    9752 command_runner.go:130] >   Machine ID:                 b3fc7859c5954f1297433aed117b91b8
	I0603 14:51:48.321381    9752 command_runner.go:130] >   System UUID:                e10deb53-3c27-6749-b4b3-758259579a7c
	I0603 14:51:48.321381    9752 command_runner.go:130] >   Boot ID:                    c5481ad8-4fd9-4085-86d3-6f705a8caf45
	I0603 14:51:48.321381    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:48.321381    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:48.321381    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:48.321456    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:48.321456    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:48.321456    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:48.321456    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:48.321456    9752 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0603 14:51:48.321523    9752 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0603 14:51:48.321538    9752 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:48.321554    9752 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0603 14:51:48.321554    9752 command_runner.go:130] >   kube-system                 kindnet-h58hc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0603 14:51:48.321554    9752 command_runner.go:130] >   kube-system                 kube-proxy-ctm5l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0603 14:51:48.321554    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:48.321554    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Resource           Requests   Limits
	I0603 14:51:48.321554    9752 command_runner.go:130] >   --------           --------   ------
	I0603 14:51:48.321554    9752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 14:51:48.321554    9752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 14:51:48.321554    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 14:51:48.321554    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 14:51:48.321554    9752 command_runner.go:130] > Events:
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0603 14:51:48.321554    9752 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  Starting                 5m39s                  kube-proxy       
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-720500-m03 status is now: NodeReady
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m44s (x2 over 5m44s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m44s (x2 over 5m44s)  kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m44s (x2 over 5m44s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  RegisteredNode           5m41s                  node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeReady                5m37s                  kubelet          Node multinode-720500-m03 status is now: NodeReady
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeNotReady             4m1s                   node-controller  Node multinode-720500-m03 status is now: NodeNotReady
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	I0603 14:51:48.331127    9752 logs.go:123] Gathering logs for coredns [68e49c3e6dda] ...
	I0603 14:51:48.331127    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68e49c3e6dda"
	I0603 14:51:48.370757    9752 command_runner.go:130] > .:53
	I0603 14:51:48.370757    9752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	I0603 14:51:48.370899    9752 command_runner.go:130] > CoreDNS-1.11.1
	I0603 14:51:48.370899    9752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 14:51:48.370899    9752 command_runner.go:130] > [INFO] 127.0.0.1:41900 - 64692 "HINFO IN 6455764258890599449.483474031935060007. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.132764335s
	I0603 14:51:48.370899    9752 command_runner.go:130] > [INFO] 10.244.1.2:42222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002636s
	I0603 14:51:48.370899    9752 command_runner.go:130] > [INFO] 10.244.1.2:57223 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.096802056s
	I0603 14:51:48.370970    9752 command_runner.go:130] > [INFO] 10.244.1.2:36397 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.151408488s
	I0603 14:51:48.370970    9752 command_runner.go:130] > [INFO] 10.244.1.2:59107 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.364951305s
	I0603 14:51:48.371031    9752 command_runner.go:130] > [INFO] 10.244.0.3:53007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004329s
	I0603 14:51:48.371031    9752 command_runner.go:130] > [INFO] 10.244.0.3:41844 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001542s
	I0603 14:51:48.371031    9752 command_runner.go:130] > [INFO] 10.244.0.3:33279 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174s
	I0603 14:51:48.371100    9752 command_runner.go:130] > [INFO] 10.244.0.3:34469 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0001054s
	I0603 14:51:48.371100    9752 command_runner.go:130] > [INFO] 10.244.1.2:33917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001325s
	I0603 14:51:48.371148    9752 command_runner.go:130] > [INFO] 10.244.1.2:49000 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025227215s
	I0603 14:51:48.371148    9752 command_runner.go:130] > [INFO] 10.244.1.2:40535 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002926s
	I0603 14:51:48.371223    9752 command_runner.go:130] > [INFO] 10.244.1.2:57809 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001012s
	I0603 14:51:48.371246    9752 command_runner.go:130] > [INFO] 10.244.1.2:43376 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024865416s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:51758 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003251s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:42717 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:52073 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001596s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:39307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001382s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:57391 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000513s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:40338 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001263s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:45271 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001333s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:50324 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000215901s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:51522 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001987s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:39150 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001291s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:56081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001424s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:46468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003026s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:57532 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130801s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:36166 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001469s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:58091 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001725s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:52049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274601s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:51870 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002814s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:51517 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001499s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:39242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000636s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:34329 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260201s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:47951 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001521s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:52718 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0003583s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:45357 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001838s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:50865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001742s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001322s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:51977 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:47306 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001807s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0603 14:51:48.374748    9752 logs.go:123] Gathering logs for kube-scheduler [e2d000674d52] ...
	I0603 14:51:48.374804    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2d000674d52"
	I0603 14:51:48.402994    9752 command_runner.go:130] ! I0603 14:50:36.598072       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:48.403181    9752 command_runner.go:130] ! W0603 14:50:39.337367       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 14:51:48.403181    9752 command_runner.go:130] ! W0603 14:50:39.337481       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:48.403379    9752 command_runner.go:130] ! W0603 14:50:39.337517       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 14:51:48.403459    9752 command_runner.go:130] ! W0603 14:50:39.337620       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:51:48.403544    9752 command_runner.go:130] ! I0603 14:50:39.434477       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:51:48.403544    9752 command_runner.go:130] ! I0603 14:50:39.434769       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:48.403544    9752 command_runner.go:130] ! I0603 14:50:39.439758       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:51:48.403609    9752 command_runner.go:130] ! I0603 14:50:39.442615       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:51:48.403634    9752 command_runner.go:130] ! I0603 14:50:39.442644       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:48.403663    9752 command_runner.go:130] ! I0603 14:50:39.443721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:48.403663    9752 command_runner.go:130] ! I0603 14:50:39.542876       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:48.406232    9752 logs.go:123] Gathering logs for kube-scheduler [ec3860b2bb3e] ...
	I0603 14:51:48.406232    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3860b2bb3e"
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:13.528076       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.031664       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.031870       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.032299       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.032427       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:15.125795       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:15.125934       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:15.129030       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:15.132330       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:15.140068       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:15.132344       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.148563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.437916    9752 command_runner.go:130] ! E0603 14:27:15.150706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.151023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:48.437916    9752 command_runner.go:130] ! E0603 14:27:15.152765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.154981       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:48.438460    9752 command_runner.go:130] ! E0603 14:27:15.155066       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:48.438460    9752 command_runner.go:130] ! W0603 14:27:15.155620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.438511    9752 command_runner.go:130] ! E0603 14:27:15.155698       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.438552    9752 command_runner.go:130] ! W0603 14:27:15.155839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.438552    9752 command_runner.go:130] ! E0603 14:27:15.155928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.151535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! E0603 14:27:15.156969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.156902       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! E0603 14:27:15.158297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.151896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! E0603 14:27:15.159055       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.152056       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! E0603 14:27:15.159892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.152248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.152377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.152535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.152729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.156318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:48.439158    9752 command_runner.go:130] ! W0603 14:27:15.151779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:48.439226    9752 command_runner.go:130] ! E0603 14:27:15.160787       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:48.439226    9752 command_runner.go:130] ! E0603 14:27:15.160968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:48.439226    9752 command_runner.go:130] ! E0603 14:27:15.161285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:48.439226    9752 command_runner.go:130] ! E0603 14:27:15.161862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:48.439377    9752 command_runner.go:130] ! E0603 14:27:15.161874       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439377    9752 command_runner.go:130] ! E0603 14:27:15.161880       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:48.439472    9752 command_runner.go:130] ! W0603 14:27:16.140920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:48.439493    9752 command_runner.go:130] ! E0603 14:27:16.140979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:48.439531    9752 command_runner.go:130] ! W0603 14:27:16.241899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:48.439570    9752 command_runner.go:130] ! E0603 14:27:16.242196       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.262469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.263070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.294257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.294495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.364252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.364604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.422522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.422581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.468112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.468324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.510809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.511288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.596260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.596369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.607837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.608073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.665087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.666440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.711247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.711594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.716923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.716968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.731690       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.732816       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.743716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.743766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! I0603 14:27:18.441261       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:48:07.717597       1 run.go:74] "command failed" err="finished without leader elect"
	I0603 14:51:48.450559    9752 logs.go:123] Gathering logs for kube-controller-manager [f14b3b67d8f2] ...
	I0603 14:51:48.450559    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14b3b67d8f2"
	I0603 14:51:48.479513    9752 command_runner.go:130] ! I0603 14:50:37.132219       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:48.479513    9752 command_runner.go:130] ! I0603 14:50:37.965887       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 14:51:48.479585    9752 command_runner.go:130] ! I0603 14:50:37.966244       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:48.479585    9752 command_runner.go:130] ! I0603 14:50:37.969206       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:48.479585    9752 command_runner.go:130] ! I0603 14:50:37.969593       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:48.479585    9752 command_runner.go:130] ! I0603 14:50:37.970401       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 14:51:48.479665    9752 command_runner.go:130] ! I0603 14:50:37.970711       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:48.479729    9752 command_runner.go:130] ! I0603 14:50:41.339512       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 14:51:48.479729    9752 command_runner.go:130] ! I0603 14:50:41.341523       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 14:51:48.479729    9752 command_runner.go:130] ! E0603 14:50:41.352670       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 14:51:48.479791    9752 command_runner.go:130] ! I0603 14:50:41.352747       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 14:51:48.479813    9752 command_runner.go:130] ! I0603 14:50:41.352812       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 14:51:48.479855    9752 command_runner.go:130] ! I0603 14:50:41.408502       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 14:51:48.479855    9752 command_runner.go:130] ! I0603 14:50:41.409411       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 14:51:48.479855    9752 command_runner.go:130] ! I0603 14:50:41.409645       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 14:51:48.479915    9752 command_runner.go:130] ! I0603 14:50:41.419223       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 14:51:48.479915    9752 command_runner.go:130] ! I0603 14:50:41.421972       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 14:51:48.479915    9752 command_runner.go:130] ! I0603 14:50:41.422044       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 14:51:48.479978    9752 command_runner.go:130] ! I0603 14:50:41.427251       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 14:51:48.480002    9752 command_runner.go:130] ! I0603 14:50:41.427473       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 14:51:48.480027    9752 command_runner.go:130] ! I0603 14:50:41.427485       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 14:51:48.480027    9752 command_runner.go:130] ! I0603 14:50:41.433520       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 14:51:48.480086    9752 command_runner.go:130] ! I0603 14:50:41.433884       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 14:51:48.480086    9752 command_runner.go:130] ! I0603 14:50:41.442828       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 14:51:48.480086    9752 command_runner.go:130] ! I0603 14:50:41.442944       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 14:51:48.480086    9752 command_runner.go:130] ! I0603 14:50:41.443317       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 14:51:48.480166    9752 command_runner.go:130] ! I0603 14:50:41.443408       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.443456       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.444287       1 shared_informer.go:320] Caches are synced for tokens
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.448688       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.448996       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.449010       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.471390       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.478411       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.478486       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.496707       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.496851       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.496864       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.512398       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.512785       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.514642       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.526995       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.528483       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.528503       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.560312       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.560410       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.560606       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! W0603 14:50:41.560637       1 shared_informer.go:597] resyncPeriod 13h36m9.576172414s is smaller than resyncCheckPeriod 18h19m8.512720564s and the informer has already started. Changing it to 18h19m8.512720564s
	I0603 14:51:48.480722    9752 command_runner.go:130] ! I0603 14:50:41.560790       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 14:51:48.480722    9752 command_runner.go:130] ! I0603 14:50:41.560834       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 14:51:48.480722    9752 command_runner.go:130] ! I0603 14:50:41.561009       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 14:51:48.480722    9752 command_runner.go:130] ! I0603 14:50:41.562817       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 14:51:48.480815    9752 command_runner.go:130] ! I0603 14:50:41.562891       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 14:51:48.480815    9752 command_runner.go:130] ! I0603 14:50:41.562939       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 14:51:48.480870    9752 command_runner.go:130] ! I0603 14:50:41.562993       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 14:51:48.480930    9752 command_runner.go:130] ! I0603 14:50:41.563015       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 14:51:48.480968    9752 command_runner.go:130] ! I0603 14:50:41.563032       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 14:51:48.481003    9752 command_runner.go:130] ! I0603 14:50:41.563098       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 14:51:48.481003    9752 command_runner.go:130] ! I0603 14:50:41.564183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 14:51:48.481003    9752 command_runner.go:130] ! I0603 14:50:41.564221       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.564392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.564485       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.564524       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.564636       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.564663       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.564687       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.565005       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.565020       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.565041       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.581314       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.587130       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.587228       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.587968       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.594087       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.594455       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.594469       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.597147       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.597498       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.597530       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.607190       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.607598       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.607632       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.610674       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.610909       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.611242       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.614142       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.614447       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.614483       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.635724       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.635913       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.635952       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.636091       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.640219       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 14:51:48.481773    9752 command_runner.go:130] ! I0603 14:50:41.640668       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 14:51:48.481807    9752 command_runner.go:130] ! I0603 14:50:41.640872       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 14:51:48.481807    9752 command_runner.go:130] ! I0603 14:50:41.653671       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.654023       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.654058       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.667205       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.667229       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.667236       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.669727       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.669883       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.726233       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.726660       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.729282       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.729661       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.729876       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.736485       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.737260       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 14:51:48.481842    9752 command_runner.go:130] ! E0603 14:50:41.740502       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.740814       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.740933       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.741056       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.750961       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.751223       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.751477       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.792608       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.792759       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.792773       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.844612       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.844676       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.844688       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.896427       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.896537       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 14:51:48.482389    9752 command_runner.go:130] ! I0603 14:50:41.896561       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 14:51:48.482389    9752 command_runner.go:130] ! I0603 14:50:41.896589       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 14:51:48.482464    9752 command_runner.go:130] ! I0603 14:50:41.942852       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 14:51:48.482464    9752 command_runner.go:130] ! I0603 14:50:41.943245       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 14:51:48.482464    9752 command_runner.go:130] ! I0603 14:50:41.943758       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 14:51:48.482519    9752 command_runner.go:130] ! I0603 14:50:41.993465       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 14:51:48.482519    9752 command_runner.go:130] ! I0603 14:50:41.993559       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 14:51:48.482519    9752 command_runner.go:130] ! I0603 14:50:41.993571       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 14:51:48.482519    9752 command_runner.go:130] ! I0603 14:50:42.042940       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 14:51:48.482519    9752 command_runner.go:130] ! I0603 14:50:42.043287       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 14:51:48.482519    9752 command_runner.go:130] ! I0603 14:50:42.043532       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 14:51:48.482609    9752 command_runner.go:130] ! I0603 14:50:42.043637       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 14:51:48.482637    9752 command_runner.go:130] ! I0603 14:50:52.110253       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 14:51:48.482637    9752 command_runner.go:130] ! I0603 14:50:52.110544       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 14:51:48.482637    9752 command_runner.go:130] ! I0603 14:50:52.110823       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.111251       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.114516       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.114754       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.114859       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.115420       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.120172       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.120726       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.120900       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.130702       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.132004       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.132310       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.135969       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.136243       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.136643       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.137507       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.137603       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.137643       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.137983       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.138267       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.138302       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.138609       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.138713       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.138746       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.138986       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.143612       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.143872       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.143971       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.153209       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.172692       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.193739       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 14:51:48.483259    9752 command_runner.go:130] ! I0603 14:50:52.202204       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500\" does not exist"
	I0603 14:51:48.483312    9752 command_runner.go:130] ! I0603 14:50:52.202247       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:51:48.483312    9752 command_runner.go:130] ! I0603 14:50:52.202568       1 shared_informer.go:320] Caches are synced for TTL
	I0603 14:51:48.483312    9752 command_runner.go:130] ! I0603 14:50:52.202880       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:48.483415    9752 command_runner.go:130] ! I0603 14:50:52.206448       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:48.483415    9752 command_runner.go:130] ! I0603 14:50:52.209857       1 shared_informer.go:320] Caches are synced for expand
	I0603 14:51:48.483452    9752 command_runner.go:130] ! I0603 14:50:52.210173       1 shared_informer.go:320] Caches are synced for namespace
	I0603 14:51:48.483452    9752 command_runner.go:130] ! I0603 14:50:52.211842       1 shared_informer.go:320] Caches are synced for node
	I0603 14:51:48.483452    9752 command_runner.go:130] ! I0603 14:50:52.213573       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 14:51:48.483452    9752 command_runner.go:130] ! I0603 14:50:52.213786       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 14:51:48.483452    9752 command_runner.go:130] ! I0603 14:50:52.213951       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 14:51:48.483452    9752 command_runner.go:130] ! I0603 14:50:52.214197       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 14:51:48.483615    9752 command_runner.go:130] ! I0603 14:50:52.227537       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 14:51:48.483615    9752 command_runner.go:130] ! I0603 14:50:52.228829       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 14:51:48.483615    9752 command_runner.go:130] ! I0603 14:50:52.230275       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 14:51:48.483615    9752 command_runner.go:130] ! I0603 14:50:52.233623       1 shared_informer.go:320] Caches are synced for HPA
	I0603 14:51:48.483693    9752 command_runner.go:130] ! I0603 14:50:52.237260       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 14:51:48.483693    9752 command_runner.go:130] ! I0603 14:50:52.238266       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 14:51:48.483693    9752 command_runner.go:130] ! I0603 14:50:52.238408       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 14:51:48.483693    9752 command_runner.go:130] ! I0603 14:50:52.238593       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:48.483693    9752 command_runner.go:130] ! I0603 14:50:52.239064       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 14:51:48.483693    9752 command_runner.go:130] ! I0603 14:50:52.242643       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 14:51:48.483778    9752 command_runner.go:130] ! I0603 14:50:52.243734       1 shared_informer.go:320] Caches are synced for taint
	I0603 14:51:48.483778    9752 command_runner.go:130] ! I0603 14:50:52.243982       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 14:51:48.483778    9752 command_runner.go:130] ! I0603 14:50:52.246907       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 14:51:48.483852    9752 command_runner.go:130] ! I0603 14:50:52.248798       1 shared_informer.go:320] Caches are synced for GC
	I0603 14:51:48.483876    9752 command_runner.go:130] ! I0603 14:50:52.249570       1 shared_informer.go:320] Caches are synced for service account
	I0603 14:51:48.483876    9752 command_runner.go:130] ! I0603 14:50:52.252842       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 14:51:48.483876    9752 command_runner.go:130] ! I0603 14:50:52.254214       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 14:51:48.483876    9752 command_runner.go:130] ! I0603 14:50:52.278584       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 14:51:48.483876    9752 command_runner.go:130] ! I0603 14:50:52.278573       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500"
	I0603 14:51:48.483938    9752 command_runner.go:130] ! I0603 14:50:52.278738       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:51:48.483990    9752 command_runner.go:130] ! I0603 14:50:52.278760       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:51:48.484024    9752 command_runner.go:130] ! I0603 14:50:52.279382       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:48.484024    9752 command_runner.go:130] ! I0603 14:50:52.288184       1 shared_informer.go:320] Caches are synced for disruption
	I0603 14:51:48.484061    9752 command_runner.go:130] ! I0603 14:50:52.293854       1 shared_informer.go:320] Caches are synced for deployment
	I0603 14:51:48.484061    9752 command_runner.go:130] ! I0603 14:50:52.294911       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 14:51:48.484099    9752 command_runner.go:130] ! I0603 14:50:52.297844       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 14:51:48.484099    9752 command_runner.go:130] ! I0603 14:50:52.297906       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 14:51:48.484099    9752 command_runner.go:130] ! I0603 14:50:52.303945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.988424ms"
	I0603 14:51:48.484099    9752 command_runner.go:130] ! I0603 14:50:52.304988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.899µs"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.309899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.433483ms"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.310618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.311874       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.315773       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.322625       1 shared_informer.go:320] Caches are synced for job
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.328121       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.345391       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.415295       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.416018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.421610       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.453966       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.465679       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.907461       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.937479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.937578       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:51:22.286800       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:51:45.740640       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.050345ms"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:51:45.740735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.201µs"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:51:45.758728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.201µs"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:51:45.833756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.845189ms"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:51:45.833914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.301µs"
	I0603 14:51:48.499438    9752 logs.go:123] Gathering logs for kindnet [008dec75d90c] ...
	I0603 14:51:48.499438    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 008dec75d90c"
	I0603 14:51:48.525450    9752 command_runner.go:130] ! I0603 14:50:42.082079       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 14:51:48.525450    9752 command_runner.go:130] ! I0603 14:50:42.082943       1 main.go:107] hostIP = 172.22.154.20
	I0603 14:51:48.525450    9752 command_runner.go:130] ! podIP = 172.22.154.20
	I0603 14:51:48.526146    9752 command_runner.go:130] ! I0603 14:50:42.083380       1 main.go:116] setting mtu 1500 for CNI 
	I0603 14:51:48.526592    9752 command_runner.go:130] ! I0603 14:50:42.083413       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 14:51:48.526592    9752 command_runner.go:130] ! I0603 14:50:42.083683       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 14:51:48.526592    9752 command_runner.go:130] ! I0603 14:51:12.571541       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0603 14:51:48.526592    9752 command_runner.go:130] ! I0603 14:51:12.651275       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:48.526860    9752 command_runner.go:130] ! I0603 14:51:12.651428       1 main.go:227] handling current node
	I0603 14:51:48.526860    9752 command_runner.go:130] ! I0603 14:51:12.652437       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:48.526860    9752 command_runner.go:130] ! I0603 14:51:12.652687       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:48.526962    9752 command_runner.go:130] ! I0603 14:51:12.652926       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.22.146.196 Flags: [] Table: 0} 
	I0603 14:51:48.527032    9752 command_runner.go:130] ! I0603 14:51:12.653574       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:48.527032    9752 command_runner.go:130] ! I0603 14:51:12.653674       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:48.527097    9752 command_runner.go:130] ! I0603 14:51:12.653740       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.22.151.134 Flags: [] Table: 0} 
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:22.664648       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:22.664694       1 main.go:227] handling current node
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:22.664708       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:22.664715       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:22.664826       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:22.665507       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:32.678392       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:32.678477       1 main.go:227] handling current node
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:32.678492       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:32.679315       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:32.679578       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:32.679593       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:42.686747       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:42.686840       1 main.go:227] handling current node
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:42.686854       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:42.686861       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:42.687305       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:42.687446       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:48.530231    9752 logs.go:123] Gathering logs for Docker ...
	I0603 14:51:48.530231    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:05 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 systemd[1]: Starting Docker Application Container Engine...
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.547305957Z" level=info msg="Starting up"
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.548486369Z" level=info msg="containerd not running, starting managed containerd"
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.550163087Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=663
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.588439684Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615622567Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615812869Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615892669Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615996071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.616816479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.616941980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617127782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617266784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617291284Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617304084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617934891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.618718299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621568528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621673229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621927432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622026433Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622569239Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622740941Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622759241Z" level=info msg="metadata content store policy set" policy=shared
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.634889967Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.634987368Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635019568Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635037868Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635068969Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635139569Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635454873Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635562874Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635584474Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635599174Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635613674Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635627574Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635643175Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635663175Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635679475Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635693275Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635706375Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635718075Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635850277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635881177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635899277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635913377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635929077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635942078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635954478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635967678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635981078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635996378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636009278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636021378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636050579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636066579Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636087279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636101979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636113679Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636360182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636390182Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636405182Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636417883Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636428083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636445483Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636457683Z" level=info msg="NRI interface is disabled by configuration."
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636895188Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637062689Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637110790Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637130090Z" level=info msg="containerd successfully booted in 0.051012s"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:58 multinode-720500 dockerd[657]: time="2024-06-03T14:49:58.605269655Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:58 multinode-720500 dockerd[657]: time="2024-06-03T14:49:58.830205845Z" level=info msg="Loading containers: start."
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.290763156Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.371043862Z" level=info msg="Loading containers: done."
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.398495238Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.399429147Z" level=info msg="Daemon has completed initialization"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.454347399Z" level=info msg="API listen on [::]:2376"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.454526701Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 systemd[1]: Started Docker Application Container Engine.
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 systemd[1]: Stopping Docker Application Container Engine...
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.502444000Z" level=info msg="Processing signal 'terminated'"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.507803805Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508158405Z" level=info msg="Daemon shutdown complete"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508284905Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508315705Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: docker.service: Deactivated successfully.
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: Stopped Docker Application Container Engine.
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: Starting Docker Application Container Engine...
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.581999493Z" level=info msg="Starting up"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.582971494Z" level=info msg="containerd not running, starting managed containerd"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.586955297Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1060
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.619972528Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.642740749Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.642897349Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643057949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643079049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643105249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643117549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643236149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643414849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643436249Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643446349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643469050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643579550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646283452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646409552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646539152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646683652Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646720152Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.647911754Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648009354Z" level=info msg="metadata content store policy set" policy=shared
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648261654Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648362554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648383154Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648399754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648413954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648460954Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649437555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649582355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649628755Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649649855Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649667455Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649683955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649698955Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649721455Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649742255Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649758455Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649834555Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649964955Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650022156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650042056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650059256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650077256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650091456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650109256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650125756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650143656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650161256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650181156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650384856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650434256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650459456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650483856Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650511256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650529056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650544556Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650596756Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650696356Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650722156Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650741356Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650755156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650769156Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650940656Z" level=info msg="NRI interface is disabled by configuration."
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652184258Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652391658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652570358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652616758Z" level=info msg="containerd successfully booted in 0.035610s"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.629822557Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.661126586Z" level=info msg="Loading containers: start."
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.933266636Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.024107020Z" level=info msg="Loading containers: done."
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.055971749Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.056192749Z" level=info msg="Daemon has completed initialization"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.104434794Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.104654694Z" level=info msg="API listen on [::]:2376"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 systemd[1]: Started Docker Application Container Engine.
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Loaded network plugin cni"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Start cri-dockerd grpc backend"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-c9wpc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a\""
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-n2t5d_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0\""
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.786808143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.786968543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.787857244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.788128044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.878884027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882292830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882532331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882658231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.964961706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965059107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965073207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965170307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0461b752e72814194a3ff0778ad4897f646990c90f8c3fcfb9c28be750bfab15/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.004294343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.006505445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.006802445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.007209145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/29feb700b8ebf36a5e533c2d019afb67137df3c39cd996736aba2eea6197e1b3/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e60bc15f541ebe44a8b2d1cc1a4a878d35fac3b2b8b23ad5b59ae6a7c18fa90/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/192b150e443d2d545d193223f6cdc02bc60fa88f9e646c72e84cad439aec3645/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330597043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330771943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330809243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330940843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.411710918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412168918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412399218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412596918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.543921039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544077939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544114939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544224939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547915343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547962443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547974143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.548055043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596002188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596253788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596401388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596628788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633733423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633807223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633821423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633921623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665408852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665567252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665590052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665814152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ae2b089ecf3ba840b08192449967b2406f6c6d0d8a56a114ddaabc35e3c7ee5/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b4a4ad712a66e8ac5a3ba6d988006318e7c0932c2ad0e4ce9838e7a98695f555/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.147693095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.147891096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.148071396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.148525196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236102677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236209377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236229077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236423777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a3698c141b11639f71ba16cbcb832e7c02097b07aaf307ba72c7cf41a64d9dde/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.541976658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.542524859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.542803559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.545377661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1054]: time="2024-06-03T14:51:11.898791571Z" level=info msg="ignoring event" container=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.899973164Z" level=info msg="shim disconnected" id=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 namespace=moby
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.900143563Z" level=warning msg="cleaning up after shim disconnected" id=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 namespace=moby
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.900158663Z" level=info msg="cleaning up dead shim" namespace=moby
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147466127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147614527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147634527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.148526626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.314851642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.315085942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.315407842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.320950643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354750647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354889547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354906247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.355401447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894225423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894606924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894797424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894956925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.942044061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.942892263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.943014363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.943428065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.129614    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:51:51.160093    9752 command_runner.go:130] > 1877
	I0603 14:51:51.160219    9752 api_server.go:72] duration metric: took 1m7.3707328s to wait for apiserver process to appear ...
	I0603 14:51:51.160324    9752 api_server.go:88] waiting for apiserver healthz status ...
	I0603 14:51:51.170922    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0603 14:51:51.193114    9752 command_runner.go:130] > 885576ffcadd
	I0603 14:51:51.193114    9752 logs.go:276] 1 containers: [885576ffcadd]
	I0603 14:51:51.203521    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0603 14:51:51.224331    9752 command_runner.go:130] > 480ef64cfa22
	I0603 14:51:51.225818    9752 logs.go:276] 1 containers: [480ef64cfa22]
	I0603 14:51:51.235814    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0603 14:51:51.256783    9752 command_runner.go:130] > f9b260d61dfb
	I0603 14:51:51.257489    9752 command_runner.go:130] > 68e49c3e6dda
	I0603 14:51:51.258733    9752 logs.go:276] 2 containers: [f9b260d61dfb 68e49c3e6dda]
	I0603 14:51:51.268752    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0603 14:51:51.289154    9752 command_runner.go:130] > e2d000674d52
	I0603 14:51:51.290275    9752 command_runner.go:130] > ec3860b2bb3e
	I0603 14:51:51.290327    9752 logs.go:276] 2 containers: [e2d000674d52 ec3860b2bb3e]
	I0603 14:51:51.299288    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0603 14:51:51.328267    9752 command_runner.go:130] > 42926c33070c
	I0603 14:51:51.328799    9752 command_runner.go:130] > 3823f2e2bdb2
	I0603 14:51:51.328924    9752 logs.go:276] 2 containers: [42926c33070c 3823f2e2bdb2]
	I0603 14:51:51.339766    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0603 14:51:51.364195    9752 command_runner.go:130] > f14b3b67d8f2
	I0603 14:51:51.364195    9752 command_runner.go:130] > 63a6ebee2e83
	I0603 14:51:51.364195    9752 logs.go:276] 2 containers: [f14b3b67d8f2 63a6ebee2e83]
	I0603 14:51:51.374860    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0603 14:51:51.398493    9752 command_runner.go:130] > 008dec75d90c
	I0603 14:51:51.398493    9752 command_runner.go:130] > ab840a6a9856
	I0603 14:51:51.398493    9752 logs.go:276] 2 containers: [008dec75d90c ab840a6a9856]
	I0603 14:51:51.398493    9752 logs.go:123] Gathering logs for kindnet [008dec75d90c] ...
	I0603 14:51:51.398493    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 008dec75d90c"
	I0603 14:51:51.422342    9752 command_runner.go:130] ! I0603 14:50:42.082079       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 14:51:51.423387    9752 command_runner.go:130] ! I0603 14:50:42.082943       1 main.go:107] hostIP = 172.22.154.20
	I0603 14:51:51.423689    9752 command_runner.go:130] ! podIP = 172.22.154.20
	I0603 14:51:51.423689    9752 command_runner.go:130] ! I0603 14:50:42.083380       1 main.go:116] setting mtu 1500 for CNI 
	I0603 14:51:51.423689    9752 command_runner.go:130] ! I0603 14:50:42.083413       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 14:51:51.423689    9752 command_runner.go:130] ! I0603 14:50:42.083683       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 14:51:51.423746    9752 command_runner.go:130] ! I0603 14:51:12.571541       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.651275       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.651428       1 main.go:227] handling current node
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.652437       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.652687       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.652926       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.22.146.196 Flags: [] Table: 0} 
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.653574       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.653674       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:51.423920    9752 command_runner.go:130] ! I0603 14:51:12.653740       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.22.151.134 Flags: [] Table: 0} 
	I0603 14:51:51.423920    9752 command_runner.go:130] ! I0603 14:51:22.664648       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:51.423954    9752 command_runner.go:130] ! I0603 14:51:22.664694       1 main.go:227] handling current node
	I0603 14:51:51.423954    9752 command_runner.go:130] ! I0603 14:51:22.664708       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:51.423975    9752 command_runner.go:130] ! I0603 14:51:22.664715       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:51.423975    9752 command_runner.go:130] ! I0603 14:51:22.664826       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:51.424017    9752 command_runner.go:130] ! I0603 14:51:22.665507       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:51.424017    9752 command_runner.go:130] ! I0603 14:51:32.678392       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:51.424017    9752 command_runner.go:130] ! I0603 14:51:32.678477       1 main.go:227] handling current node
	I0603 14:51:51.424055    9752 command_runner.go:130] ! I0603 14:51:32.678492       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:51.424055    9752 command_runner.go:130] ! I0603 14:51:32.679315       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:51.424107    9752 command_runner.go:130] ! I0603 14:51:32.679578       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:51.424107    9752 command_runner.go:130] ! I0603 14:51:32.679593       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:51.424107    9752 command_runner.go:130] ! I0603 14:51:42.686747       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:51.424140    9752 command_runner.go:130] ! I0603 14:51:42.686840       1 main.go:227] handling current node
	I0603 14:51:51.424140    9752 command_runner.go:130] ! I0603 14:51:42.686854       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:51.424140    9752 command_runner.go:130] ! I0603 14:51:42.686861       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:51.424186    9752 command_runner.go:130] ! I0603 14:51:42.687305       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:51.424186    9752 command_runner.go:130] ! I0603 14:51:42.687446       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:51.429915    9752 logs.go:123] Gathering logs for kubelet ...
	I0603 14:51:51.429915    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 14:51:51.460186    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:51.460186    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.461169    1389 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:51.460818    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.461675    1389 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:51.460818    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.463263    1389 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:51.460818    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: E0603 14:50:30.464581    1389 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 14:51:51.460818    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:51.460818    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 14:51:51.460818    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0603 14:51:51.460917    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 14:51:51.460917    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:51.460917    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.183733    1442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:51.460917    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.183842    1442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:51.460917    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.187119    1442 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:51.460997    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: E0603 14:50:31.187481    1442 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 14:51:51.460997    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:51.460997    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.822960    1525 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.823030    1525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.823310    1525 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.825110    1525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.838917    1525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.864578    1525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.864681    1525 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.865871    1525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.865955    1525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-720500","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.867023    1525 topology_manager.go:138] "Creating topology manager with none policy"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.867065    1525 container_manager_linux.go:301] "Creating device plugin manager"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.868032    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872473    1525 kubelet.go:400] "Attempting to sync node with API server"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872570    1525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872603    1525 kubelet.go:312] "Adding apiserver pod source"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.874552    1525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.878535    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.878646    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.881181    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.881366    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.461661    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.883254    1525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0603 14:51:51.461661    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.884826    1525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0603 14:51:51.461661    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.885850    1525 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0603 14:51:51.461661    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.886975    1525 server.go:1264] "Started kubelet"
	I0603 14:51:51.461661    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.895136    1525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0603 14:51:51.461764    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.899089    1525 server.go:455] "Adding debug handlers to kubelet server"
	I0603 14:51:51.461764    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.899110    1525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0603 14:51:51.461822    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.901004    1525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0603 14:51:51.461891    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.902811    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.22.154.20:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-720500.17d5860f76c4d283  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-720500,UID:multinode-720500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-720500,},FirstTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,LastTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-720500,}"
	I0603 14:51:51.461891    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.905416    1525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0603 14:51:51.461891    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.915751    1525 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0603 14:51:51.461979    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.921759    1525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0603 14:51:51.461979    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.948843    1525 reconciler.go:26] "Reconciler: start to sync state"
	I0603 14:51:51.461979    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.955483    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="200ms"
	I0603 14:51:51.462066    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.955934    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.462066    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.956139    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.462066    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956405    1525 factory.go:221] Registration of the systemd container factory successfully
	I0603 14:51:51.462239    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956512    1525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0603 14:51:51.462239    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956608    1525 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0603 14:51:51.462239    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956737    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0603 14:51:51.462239    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.958873    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0603 14:51:51.462239    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.958985    1525 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0603 14:51:51.462334    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.959014    1525 kubelet.go:2337] "Starting kubelet main sync loop"
	I0603 14:51:51.462334    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.959250    1525 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0603 14:51:51.462334    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.983497    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 14:51:51.462422    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 14:51:51.462422    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 14:51:51.462422    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 14:51:51.462422    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 14:51:51.462524    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.993696    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.462558    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.993829    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.462625    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023526    1525 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0603 14:51:51.462625    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023565    1525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0603 14:51:51.462625    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023586    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0603 14:51:51.462625    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024426    1525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0603 14:51:51.462707    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024488    1525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0603 14:51:51.462707    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024529    1525 policy_none.go:49] "None policy: Start"
	I0603 14:51:51.462707    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.028955    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:51.462707    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.030495    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:51.462707    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.035699    1525 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0603 14:51:51.462791    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.035745    1525 state_mem.go:35] "Initializing new in-memory state store"
	I0603 14:51:51.462791    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.036656    1525 state_mem.go:75] "Updated machine memory state"
	I0603 14:51:51.462791    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.041946    1525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0603 14:51:51.462871    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.042384    1525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0603 14:51:51.462871    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.043501    1525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0603 14:51:51.462871    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.049031    1525 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-720500\" not found"
	I0603 14:51:51.462949    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.060498    1525 topology_manager.go:215] "Topology Admit Handler" podUID="f58e384885de6f2352fb028e836ba47f" podNamespace="kube-system" podName="kube-scheduler-multinode-720500"
	I0603 14:51:51.462949    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.061562    1525 topology_manager.go:215] "Topology Admit Handler" podUID="a9aa17bec6c8b90196f8771e2e5c6391" podNamespace="kube-system" podName="kube-apiserver-multinode-720500"
	I0603 14:51:51.463028    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.062289    1525 topology_manager.go:215] "Topology Admit Handler" podUID="78d1bd07ad8cdd8611c0b5d7e797ef30" podNamespace="kube-system" podName="kube-controller-manager-multinode-720500"
	I0603 14:51:51.463028    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.063858    1525 topology_manager.go:215] "Topology Admit Handler" podUID="7a9c45e53018cd74c5a13ccfd96f1479" podNamespace="kube-system" podName="etcd-multinode-720500"
	I0603 14:51:51.463028    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.065312    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b548c7f105007ea217eb3af0981a11ac9ecbfca503b21d85486e0b994bd5ea"
	I0603 14:51:51.463106    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.075734    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a"
	I0603 14:51:51.463106    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.101720    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf3e16838818729d3b0679cd21964fdf47441ebf169a121ac598081429082e9d"
	I0603 14:51:51.463185    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.120274    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91df341636e892cd93c25fa7ad7384bcf2bd819376c32058f4ee8317633ccdb9"
	I0603 14:51:51.463185    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.136641    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73f8312902b01b75c8ea80234be416d3ffc9a1089252bd3c6d01a2cd098215be"
	I0603 14:51:51.463185    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.156601    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0"
	I0603 14:51:51.463263    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.157623    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="400ms"
	I0603 14:51:51.463263    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.173261    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b3080db261aed80f74241b549711c9e0e8bf8d76726121d9447965ca7e2087"
	I0603 14:51:51.463364    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188271    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-kubeconfig\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:51.463364    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188310    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-ca-certs\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:51.463448    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188378    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-k8s-certs\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:51.463448    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188400    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:51.463529    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188427    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7a9c45e53018cd74c5a13ccfd96f1479-etcd-certs\") pod \"etcd-multinode-720500\" (UID: \"7a9c45e53018cd74c5a13ccfd96f1479\") " pod="kube-system/etcd-multinode-720500"
	I0603 14:51:51.463611    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188469    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7a9c45e53018cd74c5a13ccfd96f1479-etcd-data\") pod \"etcd-multinode-720500\" (UID: \"7a9c45e53018cd74c5a13ccfd96f1479\") " pod="kube-system/etcd-multinode-720500"
	I0603 14:51:51.463611    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188506    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f58e384885de6f2352fb028e836ba47f-kubeconfig\") pod \"kube-scheduler-multinode-720500\" (UID: \"f58e384885de6f2352fb028e836ba47f\") " pod="kube-system/kube-scheduler-multinode-720500"
	I0603 14:51:51.463611    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188525    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-ca-certs\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:51.463822    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188569    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-k8s-certs\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:51.463822    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188590    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-flexvolume-dir\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:51.463908    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188614    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:51.463908    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.189831    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45c98b77811e1a1610a97d2f641597b26b618ffe831fe5ad3ec241b34af76a6b"
	I0603 14:51:51.463908    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.211600    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dbe33ccede837b8bf9917f1f085422d402ca29fcadcc3715a72edb8570a28f0"
	I0603 14:51:51.463908    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.232599    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:51.463908    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.233792    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:51.464069    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.559275    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="800ms"
	I0603 14:51:51.464069    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.635611    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:51.464069    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.636574    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:51.464148    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: W0603 14:50:34.930484    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464148    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.930562    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464226    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.013602    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464226    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.013737    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464304    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.058377    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464304    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.058502    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464304    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.276396    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464403    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.276674    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464403    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.361658    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="1.6s"
	I0603 14:51:51.464403    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: I0603 14:50:35.437822    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:51.464403    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.439455    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:51.464403    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.759532    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.22.154.20:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-720500.17d5860f76c4d283  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-720500,UID:multinode-720500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-720500,},FirstTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,LastTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-720500,}"
	I0603 14:51:51.464622    9752 command_runner.go:130] > Jun 03 14:50:37 multinode-720500 kubelet[1525]: I0603 14:50:37.041688    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:51.464622    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.524109    1525 kubelet_node_status.go:112] "Node was previously registered" node="multinode-720500"
	I0603 14:51:51.464622    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.524300    1525 kubelet_node_status.go:76] "Successfully registered node" node="multinode-720500"
	I0603 14:51:51.464622    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.525714    1525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0603 14:51:51.464740    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.527071    1525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0603 14:51:51.464740    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.528427    1525 setters.go:580] "Node became not ready" node="multinode-720500" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-03T14:50:39Z","lastTransitionTime":"2024-06-03T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0603 14:51:51.464740    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.569920    1525 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-720500\" already exists" pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:51.464817    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.884500    1525 apiserver.go:52] "Watching apiserver"
	I0603 14:51:51.464817    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.889699    1525 topology_manager.go:215] "Topology Admit Handler" podUID="ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a" podNamespace="kube-system" podName="kube-proxy-64l9x"
	I0603 14:51:51.464817    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.889893    1525 topology_manager.go:215] "Topology Admit Handler" podUID="08ea7c30-4962-4026-8eb0-6864835e97e6" podNamespace="kube-system" podName="kindnet-26s27"
	I0603 14:51:51.464910    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890015    1525 topology_manager.go:215] "Topology Admit Handler" podUID="5d120704-a803-4278-aa7c-32304a6164a3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c9wpc"
	I0603 14:51:51.464910    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890251    1525 topology_manager.go:215] "Topology Admit Handler" podUID="8380cfdf-9758-4fd8-a511-db50974806a2" podNamespace="kube-system" podName="storage-provisioner"
	I0603 14:51:51.464988    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890408    1525 topology_manager.go:215] "Topology Admit Handler" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef" podNamespace="default" podName="busybox-fc5497c4f-n2t5d"
	I0603 14:51:51.464988    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890532    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-720500" podUID="a99295b9-ba4f-4b3f-9bc7-3e6e09de9b09"
	I0603 14:51:51.465065    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.890739    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.465065    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.891991    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.465144    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.919591    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-720500"
	I0603 14:51:51.465144    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.922418    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0603 14:51:51.465222    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947805    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a-lib-modules\") pod \"kube-proxy-64l9x\" (UID: \"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a\") " pod="kube-system/kube-proxy-64l9x"
	I0603 14:51:51.465222    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947924    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-cni-cfg\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:51.465317    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947970    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-xtables-lock\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:51.465317    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947990    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8380cfdf-9758-4fd8-a511-db50974806a2-tmp\") pod \"storage-provisioner\" (UID: \"8380cfdf-9758-4fd8-a511-db50974806a2\") " pod="kube-system/storage-provisioner"
	I0603 14:51:51.465417    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.948046    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a-xtables-lock\") pod \"kube-proxy-64l9x\" (UID: \"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a\") " pod="kube-system/kube-proxy-64l9x"
	I0603 14:51:51.465417    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.948118    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-lib-modules\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:51.465499    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.949354    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.465582    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.949442    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:40.449414293 +0000 UTC m=+6.735278838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.465582    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.967616    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc25f3659bb9b137f23bf9424dba20e" path="/var/lib/kubelet/pods/2dc25f3659bb9b137f23bf9424dba20e/volumes"
	I0603 14:51:51.465681    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.969042    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36433239452f37b4b0410f69c12da408" path="/var/lib/kubelet/pods/36433239452f37b4b0410f69c12da408/volumes"
	I0603 14:51:51.465681    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984720    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.465681    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984802    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.465802    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984886    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:40.484862826 +0000 UTC m=+6.770727471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.465874    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: I0603 14:50:40.019663    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-720500" podStartSLOduration=1.019649758 podStartE2EDuration="1.019649758s" podCreationTimestamp="2024-06-03 14:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:50:40.018824057 +0000 UTC m=+6.304688702" watchObservedRunningTime="2024-06-03 14:50:40.019649758 +0000 UTC m=+6.305514303"
	I0603 14:51:51.465874    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.455710    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.465960    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.455796    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:41.455777259 +0000 UTC m=+7.741641804 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.465960    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556713    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.465960    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556760    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466041    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556889    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:41.556863952 +0000 UTC m=+7.842728597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466145    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: I0603 14:50:40.845891    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae2b089ecf3ba840b08192449967b2406f6c6d0d8a56a114ddaabc35e3c7ee5"
	I0603 14:51:51.466229    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.271560    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3698c141b11639f71ba16cbcb832e7c02097b07aaf307ba72c7cf41a64d9dde"
	I0603 14:51:51.466265    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.438384    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a4ad712a66e8ac5a3ba6d988006318e7c0932c2ad0e4ce9838e7a98695f555"
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.438646    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-720500" podUID="aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef"
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.465430    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.465640    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:43.465616988 +0000 UTC m=+9.751481633 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.502271    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566766    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566801    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566917    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:43.566874981 +0000 UTC m=+9.852739626 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.961788    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.961975    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:42 multinode-720500 kubelet[1525]: I0603 14:50:42.520599    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-720500" podUID="aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef"
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.487623    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.487724    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:47.487705549 +0000 UTC m=+13.773570194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588583    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588739    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588832    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:47.588814442 +0000 UTC m=+13.874678987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.961044    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.466871    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.961649    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.466871    9752 command_runner.go:130] > Jun 03 14:50:44 multinode-720500 kubelet[1525]: E0603 14:50:44.044586    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.466871    9752 command_runner.go:130] > Jun 03 14:50:45 multinode-720500 kubelet[1525]: E0603 14:50:45.961659    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.466871    9752 command_runner.go:130] > Jun 03 14:50:45 multinode-720500 kubelet[1525]: E0603 14:50:45.961954    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.466871    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.521989    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.466871    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.522196    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:55.522177172 +0000 UTC m=+21.808041717 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.467142    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.622845    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.467142    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.623053    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.467142    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.623208    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:55.623162574 +0000 UTC m=+21.909027119 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.962070    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.962858    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.046385    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.959451    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.960279    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:51 multinode-720500 kubelet[1525]: E0603 14:50:51.960531    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:51 multinode-720500 kubelet[1525]: E0603 14:50:51.961799    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:52 multinode-720500 kubelet[1525]: I0603 14:50:52.534860    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-720500" podStartSLOduration=5.534842522 podStartE2EDuration="5.534842522s" podCreationTimestamp="2024-06-03 14:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:50:52.533300056 +0000 UTC m=+18.819164701" watchObservedRunningTime="2024-06-03 14:50:52.534842522 +0000 UTC m=+18.820707067"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:53 multinode-720500 kubelet[1525]: E0603 14:50:53.960555    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:53 multinode-720500 kubelet[1525]: E0603 14:50:53.961087    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:54 multinode-720500 kubelet[1525]: E0603 14:50:54.048175    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.600709    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.600890    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:51:11.600870216 +0000 UTC m=+37.886734761 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701124    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701172    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701306    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:51:11.701288915 +0000 UTC m=+37.987153560 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.959849    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.960175    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:57 multinode-720500 kubelet[1525]: E0603 14:50:57.960559    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:57 multinode-720500 kubelet[1525]: E0603 14:50:57.961245    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.050189    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.962718    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.963597    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:01 multinode-720500 kubelet[1525]: E0603 14:51:01.959962    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:01 multinode-720500 kubelet[1525]: E0603 14:51:01.961107    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:03 multinode-720500 kubelet[1525]: E0603 14:51:03.960485    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:03 multinode-720500 kubelet[1525]: E0603 14:51:03.961168    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:04 multinode-720500 kubelet[1525]: E0603 14:51:04.052718    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:05 multinode-720500 kubelet[1525]: E0603 14:51:05.960258    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:05 multinode-720500 kubelet[1525]: E0603 14:51:05.960918    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:07 multinode-720500 kubelet[1525]: E0603 14:51:07.960257    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:07 multinode-720500 kubelet[1525]: E0603 14:51:07.961704    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.054870    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.962422    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.963393    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.663780    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.664114    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:51:43.66409273 +0000 UTC m=+69.949957275 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.764900    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.764958    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.765022    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:51:43.765005046 +0000 UTC m=+70.050869691 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.962142    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.962815    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: I0603 14:51:12.896193    1525 scope.go:117] "RemoveContainer" containerID="097ab9a9a33bbee7997d827b04c2900ded8d532f232d924bb9d84ecc302ec8b8"
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: I0603 14:51:12.896857    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: E0603 14:51:12.897037    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8380cfdf-9758-4fd8-a511-db50974806a2)\"" pod="kube-system/storage-provisioner" podUID="8380cfdf-9758-4fd8-a511-db50974806a2"
	I0603 14:51:51.469208    9752 command_runner.go:130] > Jun 03 14:51:13 multinode-720500 kubelet[1525]: E0603 14:51:13.960835    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.469229    9752 command_runner.go:130] > Jun 03 14:51:13 multinode-720500 kubelet[1525]: E0603 14:51:13.961713    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.469229    9752 command_runner.go:130] > Jun 03 14:51:14 multinode-720500 kubelet[1525]: E0603 14:51:14.056993    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:15 multinode-720500 kubelet[1525]: E0603 14:51:15.959976    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:15 multinode-720500 kubelet[1525]: E0603 14:51:15.961758    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:17 multinode-720500 kubelet[1525]: E0603 14:51:17.963254    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:17 multinode-720500 kubelet[1525]: E0603 14:51:17.963475    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:25 multinode-720500 kubelet[1525]: I0603 14:51:25.959992    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]: E0603 14:51:33.993879    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.037024    1525 scope.go:117] "RemoveContainer" containerID="dcd798ff8a4661302e83f6f11f14422de529b0502fcd6143a4a29a3f45757a8a"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.091663    1525 scope.go:117] "RemoveContainer" containerID="5185046feae6a986658119ffc29d3a23423e83dba5ada983e73072c57ee6ad2d"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.627773    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.667520    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7"
	I0603 14:51:51.519292    9752 logs.go:123] Gathering logs for describe nodes ...
	I0603 14:51:51.519292    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 14:51:51.721796    9752 command_runner.go:130] > Name:               multinode-720500
	I0603 14:51:51.721796    9752 command_runner.go:130] > Roles:              control-plane
	I0603 14:51:51.721796    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_27_19_0700
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0603 14:51:51.721796    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:51.721796    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:27:15 +0000
	I0603 14:51:51.721796    9752 command_runner.go:130] > Taints:             <none>
	I0603 14:51:51.721796    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:51.721796    9752 command_runner.go:130] > Lease:
	I0603 14:51:51.721796    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500
	I0603 14:51:51.721796    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:51.721796    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:51:51 +0000
	I0603 14:51:51.721796    9752 command_runner.go:130] > Conditions:
	I0603 14:51:51.721796    9752 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0603 14:51:51.721796    9752 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0603 14:51:51.721796    9752 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0603 14:51:51.721796    9752 command_runner.go:130] >   DiskPressure     False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0603 14:51:51.721796    9752 command_runner.go:130] >   PIDPressure      False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0603 14:51:51.721796    9752 command_runner.go:130] >   Ready            True    Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:51:20 +0000   KubeletReady                 kubelet is posting ready status
	I0603 14:51:51.721796    9752 command_runner.go:130] > Addresses:
	I0603 14:51:51.721796    9752 command_runner.go:130] >   InternalIP:  172.22.154.20
	I0603 14:51:51.721796    9752 command_runner.go:130] >   Hostname:    multinode-720500
	I0603 14:51:51.721796    9752 command_runner.go:130] > Capacity:
	I0603 14:51:51.721796    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:51.721796    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:51.721796    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:51.721796    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:51.721796    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:51.721796    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:51.721796    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:51.721796    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:51.721796    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:51.721796    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:51.721796    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:51.721796    9752 command_runner.go:130] > System Info:
	I0603 14:51:51.721796    9752 command_runner.go:130] >   Machine ID:                 d1c31924319744c587cc3327e70686c4
	I0603 14:51:51.721796    9752 command_runner.go:130] >   System UUID:                ea941aa7-cd12-1640-be08-34f8de2baf60
	I0603 14:51:51.721796    9752 command_runner.go:130] >   Boot ID:                    81a28d6f-5e2f-4dbf-9879-01594b427fd6
	I0603 14:51:51.721796    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:51.721796    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:51.722750    9752 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0603 14:51:51.722750    9752 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0603 14:51:51.722750    9752 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:51.722750    9752 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0603 14:51:51.722750    9752 command_runner.go:130] >   default                     busybox-fc5497c4f-n2t5d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-c9wpc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 etcd-multinode-720500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 kindnet-26s27                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-720500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-720500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 kube-proxy-64l9x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-720500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:51.722750    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:51.722750    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Resource           Requests     Limits
	I0603 14:51:51.722750    9752 command_runner.go:130] >   --------           --------     ------
	I0603 14:51:51.722750    9752 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0603 14:51:51.722750    9752 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0603 14:51:51.722750    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0603 14:51:51.722750    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0603 14:51:51.722750    9752 command_runner.go:130] > Events:
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 14:51:51.722750    9752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  Starting                 69s                kube-proxy       
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-720500 status is now: NodeReady
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  Starting                 78s                kubelet          Starting kubelet.
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  RegisteredNode           59s                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	I0603 14:51:51.722750    9752 command_runner.go:130] > Name:               multinode-720500-m02
	I0603 14:51:51.722750    9752 command_runner.go:130] > Roles:              <none>
	I0603 14:51:51.722750    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500-m02
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_30_31_0700
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:51.722750    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:51.722750    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:30:30 +0000
	I0603 14:51:51.722750    9752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 14:51:51.722750    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:51.722750    9752 command_runner.go:130] > Lease:
	I0603 14:51:51.722750    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500-m02
	I0603 14:51:51.722750    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:51.722750    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:47:23 +0000
	I0603 14:51:51.722750    9752 command_runner.go:130] > Conditions:
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 14:51:51.722750    9752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 14:51:51.722750    9752 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.722750    9752 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.723801    9752 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.723801    9752 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.723801    9752 command_runner.go:130] > Addresses:
	I0603 14:51:51.723801    9752 command_runner.go:130] >   InternalIP:  172.22.146.196
	I0603 14:51:51.723801    9752 command_runner.go:130] >   Hostname:    multinode-720500-m02
	I0603 14:51:51.723801    9752 command_runner.go:130] > Capacity:
	I0603 14:51:51.723801    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:51.723801    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:51.723801    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:51.723801    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:51.723801    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:51.723801    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:51.723801    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:51.723801    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:51.723801    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:51.723801    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:51.723961    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:51.723961    9752 command_runner.go:130] > System Info:
	I0603 14:51:51.723961    9752 command_runner.go:130] >   Machine ID:                 235e819893284fd6a235e0cb3c7475f0
	I0603 14:51:51.723961    9752 command_runner.go:130] >   System UUID:                e57aaa06-73e1-b24d-bfac-b1ae5e512ff1
	I0603 14:51:51.723961    9752 command_runner.go:130] >   Boot ID:                    fe92bdd5-fbf4-4f1a-9684-a535d77de9c7
	I0603 14:51:51.723961    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:51.723961    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:51.723961    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:51.724046    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:51.724046    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:51.724046    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:51.724046    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:51.724046    9752 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0603 14:51:51.724046    9752 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0603 14:51:51.724046    9752 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0603 14:51:51.724125    9752 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:51.724125    9752 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0603 14:51:51.724125    9752 command_runner.go:130] >   default                     busybox-fc5497c4f-mjhcf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 14:51:51.724125    9752 command_runner.go:130] >   kube-system                 kindnet-fmfz2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0603 14:51:51.724125    9752 command_runner.go:130] >   kube-system                 kube-proxy-sm9rr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0603 14:51:51.724125    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:51.724203    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:51.724203    9752 command_runner.go:130] >   Resource           Requests   Limits
	I0603 14:51:51.724203    9752 command_runner.go:130] >   --------           --------   ------
	I0603 14:51:51.724203    9752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 14:51:51.724203    9752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 14:51:51.724203    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 14:51:51.724281    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 14:51:51.724281    9752 command_runner.go:130] > Events:
	I0603 14:51:51.724281    9752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 14:51:51.724281    9752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 14:51:51.724281    9752 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0603 14:51:51.724281    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientMemory
	I0603 14:51:51.724376    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasNoDiskPressure
	I0603 14:51:51.724376    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientPID
	I0603 14:51:51.724376    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:51.724376    9752 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	I0603 14:51:51.724376    9752 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-720500-m02 status is now: NodeReady
	I0603 14:51:51.724458    9752 command_runner.go:130] >   Normal  NodeNotReady             3m44s              node-controller  Node multinode-720500-m02 status is now: NodeNotReady
	I0603 14:51:51.724458    9752 command_runner.go:130] >   Normal  RegisteredNode           59s                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	I0603 14:51:51.724458    9752 command_runner.go:130] > Name:               multinode-720500-m03
	I0603 14:51:51.724458    9752 command_runner.go:130] > Roles:              <none>
	I0603 14:51:51.724458    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:51.724458    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500-m03
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_46_05_0700
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:51.724622    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:51.724622    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:51.724622    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:51.724622    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:46:04 +0000
	I0603 14:51:51.724622    9752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 14:51:51.724622    9752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 14:51:51.724622    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:51.724622    9752 command_runner.go:130] > Lease:
	I0603 14:51:51.724732    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500-m03
	I0603 14:51:51.724732    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:51.724732    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:47:06 +0000
	I0603 14:51:51.724732    9752 command_runner.go:130] > Conditions:
	I0603 14:51:51.724732    9752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 14:51:51.724808    9752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 14:51:51.724808    9752 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.724808    9752 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.724808    9752 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.724808    9752 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.724885    9752 command_runner.go:130] > Addresses:
	I0603 14:51:51.724885    9752 command_runner.go:130] >   InternalIP:  172.22.151.134
	I0603 14:51:51.724885    9752 command_runner.go:130] >   Hostname:    multinode-720500-m03
	I0603 14:51:51.724885    9752 command_runner.go:130] > Capacity:
	I0603 14:51:51.724885    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:51.724885    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:51.724885    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:51.724885    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:51.724885    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:51.724963    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:51.724963    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:51.724963    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:51.724963    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:51.724963    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:51.724963    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:51.724963    9752 command_runner.go:130] > System Info:
	I0603 14:51:51.724963    9752 command_runner.go:130] >   Machine ID:                 b3fc7859c5954f1297433aed117b91b8
	I0603 14:51:51.724963    9752 command_runner.go:130] >   System UUID:                e10deb53-3c27-6749-b4b3-758259579a7c
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Boot ID:                    c5481ad8-4fd9-4085-86d3-6f705a8caf45
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:51.725038    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:51.725115    9752 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0603 14:51:51.725115    9752 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0603 14:51:51.725115    9752 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0603 14:51:51.725115    9752 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:51.725115    9752 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0603 14:51:51.725115    9752 command_runner.go:130] >   kube-system                 kindnet-h58hc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0603 14:51:51.725115    9752 command_runner.go:130] >   kube-system                 kube-proxy-ctm5l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0603 14:51:51.725192    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:51.725192    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:51.725192    9752 command_runner.go:130] >   Resource           Requests   Limits
	I0603 14:51:51.725192    9752 command_runner.go:130] >   --------           --------   ------
	I0603 14:51:51.725279    9752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 14:51:51.725279    9752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 14:51:51.725279    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 14:51:51.725279    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 14:51:51.725279    9752 command_runner.go:130] > Events:
	I0603 14:51:51.725279    9752 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0603 14:51:51.725279    9752 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0603 14:51:51.725279    9752 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0603 14:51:51.725399    9752 command_runner.go:130] >   Normal  Starting                 5m43s                  kube-proxy       
	I0603 14:51:51.725399    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:51.725399    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	I0603 14:51:51.725399    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	I0603 14:51:51.725399    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	I0603 14:51:51.725399    9752 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-720500-m03 status is now: NodeReady
	I0603 14:51:51.725486    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m47s (x2 over 5m47s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	I0603 14:51:51.725486    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m47s (x2 over 5m47s)  kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	I0603 14:51:51.725486    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m47s (x2 over 5m47s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	I0603 14:51:51.725486    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:51.725569    9752 command_runner.go:130] >   Normal  RegisteredNode           5m44s                  node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	I0603 14:51:51.725569    9752 command_runner.go:130] >   Normal  NodeReady                5m40s                  kubelet          Node multinode-720500-m03 status is now: NodeReady
	I0603 14:51:51.725569    9752 command_runner.go:130] >   Normal  NodeNotReady             4m4s                   node-controller  Node multinode-720500-m03 status is now: NodeNotReady
	I0603 14:51:51.725569    9752 command_runner.go:130] >   Normal  RegisteredNode           59s                    node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	I0603 14:51:51.734742    9752 logs.go:123] Gathering logs for coredns [68e49c3e6dda] ...
	I0603 14:51:51.734742    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68e49c3e6dda"
	I0603 14:51:51.763787    9752 command_runner.go:130] > .:53
	I0603 14:51:51.764082    9752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	I0603 14:51:51.764082    9752 command_runner.go:130] > CoreDNS-1.11.1
	I0603 14:51:51.764082    9752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 14:51:51.764159    9752 command_runner.go:130] > [INFO] 127.0.0.1:41900 - 64692 "HINFO IN 6455764258890599449.483474031935060007. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.132764335s
	I0603 14:51:51.764159    9752 command_runner.go:130] > [INFO] 10.244.1.2:42222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002636s
	I0603 14:51:51.764196    9752 command_runner.go:130] > [INFO] 10.244.1.2:57223 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.096802056s
	I0603 14:51:51.764196    9752 command_runner.go:130] > [INFO] 10.244.1.2:36397 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.151408488s
	I0603 14:51:51.764234    9752 command_runner.go:130] > [INFO] 10.244.1.2:59107 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.364951305s
	I0603 14:51:51.764234    9752 command_runner.go:130] > [INFO] 10.244.0.3:53007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004329s
	I0603 14:51:51.764275    9752 command_runner.go:130] > [INFO] 10.244.0.3:41844 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001542s
	I0603 14:51:51.764275    9752 command_runner.go:130] > [INFO] 10.244.0.3:33279 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174s
	I0603 14:51:51.764275    9752 command_runner.go:130] > [INFO] 10.244.0.3:34469 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0001054s
	I0603 14:51:51.764340    9752 command_runner.go:130] > [INFO] 10.244.1.2:33917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001325s
	I0603 14:51:51.764340    9752 command_runner.go:130] > [INFO] 10.244.1.2:49000 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025227215s
	I0603 14:51:51.764340    9752 command_runner.go:130] > [INFO] 10.244.1.2:40535 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002926s
	I0603 14:51:51.764340    9752 command_runner.go:130] > [INFO] 10.244.1.2:57809 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001012s
	I0603 14:51:51.764408    9752 command_runner.go:130] > [INFO] 10.244.1.2:43376 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024865416s
	I0603 14:51:51.764408    9752 command_runner.go:130] > [INFO] 10.244.1.2:51758 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003251s
	I0603 14:51:51.764465    9752 command_runner.go:130] > [INFO] 10.244.1.2:42717 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112s
	I0603 14:51:51.764509    9752 command_runner.go:130] > [INFO] 10.244.1.2:52073 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001596s
	I0603 14:51:51.764509    9752 command_runner.go:130] > [INFO] 10.244.0.3:39307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001382s
	I0603 14:51:51.764509    9752 command_runner.go:130] > [INFO] 10.244.0.3:57391 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000513s
	I0603 14:51:51.764563    9752 command_runner.go:130] > [INFO] 10.244.0.3:40338 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001263s
	I0603 14:51:51.764563    9752 command_runner.go:130] > [INFO] 10.244.0.3:45271 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001333s
	I0603 14:51:51.764563    9752 command_runner.go:130] > [INFO] 10.244.0.3:50324 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000215901s
	I0603 14:51:51.764616    9752 command_runner.go:130] > [INFO] 10.244.0.3:51522 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001987s
	I0603 14:51:51.764616    9752 command_runner.go:130] > [INFO] 10.244.0.3:39150 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001291s
	I0603 14:51:51.764616    9752 command_runner.go:130] > [INFO] 10.244.0.3:56081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001424s
	I0603 14:51:51.764616    9752 command_runner.go:130] > [INFO] 10.244.1.2:46468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003026s
	I0603 14:51:51.764689    9752 command_runner.go:130] > [INFO] 10.244.1.2:57532 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130801s
	I0603 14:51:51.764689    9752 command_runner.go:130] > [INFO] 10.244.1.2:36166 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001469s
	I0603 14:51:51.764689    9752 command_runner.go:130] > [INFO] 10.244.1.2:58091 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001725s
	I0603 14:51:51.764747    9752 command_runner.go:130] > [INFO] 10.244.0.3:52049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274601s
	I0603 14:51:51.764747    9752 command_runner.go:130] > [INFO] 10.244.0.3:51870 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002814s
	I0603 14:51:51.764747    9752 command_runner.go:130] > [INFO] 10.244.0.3:51517 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001499s
	I0603 14:51:51.764747    9752 command_runner.go:130] > [INFO] 10.244.0.3:39242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000636s
	I0603 14:51:51.764819    9752 command_runner.go:130] > [INFO] 10.244.1.2:34329 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260201s
	I0603 14:51:51.764852    9752 command_runner.go:130] > [INFO] 10.244.1.2:47951 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001521s
	I0603 14:51:51.764852    9752 command_runner.go:130] > [INFO] 10.244.1.2:52718 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0003583s
	I0603 14:51:51.764852    9752 command_runner.go:130] > [INFO] 10.244.1.2:45357 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001838s
	I0603 14:51:51.764852    9752 command_runner.go:130] > [INFO] 10.244.0.3:50865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001742s
	I0603 14:51:51.764906    9752 command_runner.go:130] > [INFO] 10.244.0.3:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001322s
	I0603 14:51:51.764906    9752 command_runner.go:130] > [INFO] 10.244.0.3:51977 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	I0603 14:51:51.764906    9752 command_runner.go:130] > [INFO] 10.244.0.3:47306 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001807s
	I0603 14:51:51.764941    9752 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0603 14:51:51.764941    9752 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0603 14:51:51.768025    9752 logs.go:123] Gathering logs for kube-proxy [42926c33070c] ...
	I0603 14:51:51.768133    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42926c33070c"
	I0603 14:51:51.811741    9752 command_runner.go:130] ! I0603 14:50:42.069219       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:51:51.812148    9752 command_runner.go:130] ! I0603 14:50:42.114052       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.154.20"]
	I0603 14:51:51.812148    9752 command_runner.go:130] ! I0603 14:50:42.256500       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:51:51.812203    9752 command_runner.go:130] ! I0603 14:50:42.256559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:51:51.812203    9752 command_runner.go:130] ! I0603 14:50:42.256598       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:51:51.812276    9752 command_runner.go:130] ! I0603 14:50:42.262735       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:51:51.812301    9752 command_runner.go:130] ! I0603 14:50:42.263687       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:51:51.812349    9752 command_runner.go:130] ! I0603 14:50:42.263771       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:51.812349    9752 command_runner.go:130] ! I0603 14:50:42.271889       1 config.go:192] "Starting service config controller"
	I0603 14:51:51.812349    9752 command_runner.go:130] ! I0603 14:50:42.273191       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:51:51.812438    9752 command_runner.go:130] ! I0603 14:50:42.273658       1 config.go:319] "Starting node config controller"
	I0603 14:51:51.812438    9752 command_runner.go:130] ! I0603 14:50:42.273675       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:51:51.812438    9752 command_runner.go:130] ! I0603 14:50:42.275244       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:51:51.812479    9752 command_runner.go:130] ! I0603 14:50:42.279063       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:51:51.812479    9752 command_runner.go:130] ! I0603 14:50:42.373930       1 shared_informer.go:320] Caches are synced for node config
	I0603 14:51:51.812527    9752 command_runner.go:130] ! I0603 14:50:42.373994       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:51:51.812527    9752 command_runner.go:130] ! I0603 14:50:42.379201       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:51:51.814151    9752 logs.go:123] Gathering logs for kube-controller-manager [63a6ebee2e83] ...
	I0603 14:51:51.814151    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a6ebee2e83"
	I0603 14:51:51.841509    9752 command_runner.go:130] ! I0603 14:27:13.353282       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:51.841509    9752 command_runner.go:130] ! I0603 14:27:13.803232       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 14:51:51.841609    9752 command_runner.go:130] ! I0603 14:27:13.803270       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:51.841742    9752 command_runner.go:130] ! I0603 14:27:13.805599       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:13.806647       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:13.806911       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:13.807149       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.070475       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.071643       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.088516       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.089260       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.091678       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.106231       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.107081       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.108455       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:51.842311    9752 command_runner.go:130] ! I0603 14:27:18.109348       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 14:51:51.842311    9752 command_runner.go:130] ! I0603 14:27:18.151033       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 14:51:51.842380    9752 command_runner.go:130] ! I0603 14:27:18.151678       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 14:51:51.842380    9752 command_runner.go:130] ! I0603 14:27:18.154062       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 14:51:51.842380    9752 command_runner.go:130] ! I0603 14:27:18.171773       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 14:51:51.842465    9752 command_runner.go:130] ! I0603 14:27:18.172224       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 14:51:51.842465    9752 command_runner.go:130] ! I0603 14:27:18.174296       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 14:51:51.842465    9752 command_runner.go:130] ! I0603 14:27:18.174338       1 shared_informer.go:320] Caches are synced for tokens
	I0603 14:51:51.842465    9752 command_runner.go:130] ! I0603 14:27:18.177788       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 14:51:51.843025    9752 command_runner.go:130] ! I0603 14:27:18.178320       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 14:51:51.843188    9752 command_runner.go:130] ! I0603 14:27:28.218964       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 14:51:51.843267    9752 command_runner.go:130] ! I0603 14:27:28.219108       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 14:51:51.843301    9752 command_runner.go:130] ! I0603 14:27:28.219379       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 14:51:51.843340    9752 command_runner.go:130] ! I0603 14:27:28.219457       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 14:51:51.843340    9752 command_runner.go:130] ! I0603 14:27:28.240397       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 14:51:51.843340    9752 command_runner.go:130] ! I0603 14:27:28.240536       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 14:51:51.843340    9752 command_runner.go:130] ! I0603 14:27:28.241865       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 14:51:51.843425    9752 command_runner.go:130] ! I0603 14:27:28.252890       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 14:51:51.843467    9752 command_runner.go:130] ! I0603 14:27:28.252986       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 14:51:51.843467    9752 command_runner.go:130] ! I0603 14:27:28.253020       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 14:51:51.843536    9752 command_runner.go:130] ! I0603 14:27:28.253969       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 14:51:51.843536    9752 command_runner.go:130] ! I0603 14:27:28.254003       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 14:51:51.843576    9752 command_runner.go:130] ! I0603 14:27:28.267837       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 14:51:51.843576    9752 command_runner.go:130] ! I0603 14:27:28.268144       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 14:51:51.843576    9752 command_runner.go:130] ! I0603 14:27:28.268510       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 14:51:51.843634    9752 command_runner.go:130] ! I0603 14:27:28.280487       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:51.843934    9752 command_runner.go:130] ! I0603 14:27:28.280963       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:51.843967    9752 command_runner.go:130] ! I0603 14:27:28.281100       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 14:51:51.843967    9752 command_runner.go:130] ! I0603 14:27:28.330303       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 14:51:51.843967    9752 command_runner.go:130] ! I0603 14:27:28.330841       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 14:51:51.844019    9752 command_runner.go:130] ! E0603 14:27:28.344040       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 14:51:51.844019    9752 command_runner.go:130] ! I0603 14:27:28.344231       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 14:51:51.844108    9752 command_runner.go:130] ! I0603 14:27:28.359644       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 14:51:51.844123    9752 command_runner.go:130] ! I0603 14:27:28.360056       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 14:51:51.844123    9752 command_runner.go:130] ! I0603 14:27:28.360090       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 14:51:51.844123    9752 command_runner.go:130] ! I0603 14:27:28.377777       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 14:51:51.844827    9752 command_runner.go:130] ! I0603 14:27:28.378044       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.378071       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.393317       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.393857       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.394059       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.410446       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.411081       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.412101       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.512629       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.513125       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.664349       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.664428       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.664441       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.664449       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.708054       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.708215       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.708231       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:28.708444       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:28.708473       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:28.708481       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:28.864634       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:28.864803       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:28.865680       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.059529       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.059649       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.059722       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.059857       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.216054       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.216706       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.217129       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.364837       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.364997       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.365010       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.412763       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.412820       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 14:51:51.845766    9752 command_runner.go:130] ! I0603 14:27:29.412852       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 14:51:51.845766    9752 command_runner.go:130] ! I0603 14:27:29.412870       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 14:51:51.845766    9752 command_runner.go:130] ! I0603 14:27:29.566965       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 14:51:51.845766    9752 command_runner.go:130] ! I0603 14:27:29.567223       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 14:51:51.845766    9752 command_runner.go:130] ! I0603 14:27:29.568152       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 14:51:51.845841    9752 command_runner.go:130] ! I0603 14:27:29.820140       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 14:51:51.845841    9752 command_runner.go:130] ! I0603 14:27:29.821302       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 14:51:51.845884    9752 command_runner.go:130] ! I0603 14:27:29.821913       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 14:51:51.845884    9752 command_runner.go:130] ! I0603 14:27:29.821950       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 14:51:51.845884    9752 command_runner.go:130] ! I0603 14:27:29.821977       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 14:51:51.845964    9752 command_runner.go:130] ! E0603 14:27:29.857788       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 14:51:51.845964    9752 command_runner.go:130] ! I0603 14:27:29.858966       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 14:51:51.845964    9752 command_runner.go:130] ! I0603 14:27:30.016833       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 14:51:51.846011    9752 command_runner.go:130] ! I0603 14:27:30.016997       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 14:51:51.846011    9752 command_runner.go:130] ! I0603 14:27:30.017402       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 14:51:51.846066    9752 command_runner.go:130] ! I0603 14:27:30.171847       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 14:51:51.846066    9752 command_runner.go:130] ! I0603 14:27:30.172459       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 14:51:51.846122    9752 command_runner.go:130] ! I0603 14:27:30.171899       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 14:51:51.846122    9752 command_runner.go:130] ! I0603 14:27:30.172588       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 14:51:51.846170    9752 command_runner.go:130] ! I0603 14:27:30.313964       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 14:51:51.846170    9752 command_runner.go:130] ! I0603 14:27:30.316900       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 14:51:51.846210    9752 command_runner.go:130] ! I0603 14:27:30.318749       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 14:51:51.846210    9752 command_runner.go:130] ! I0603 14:27:30.359770       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 14:51:51.846210    9752 command_runner.go:130] ! I0603 14:27:30.359992       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 14:51:51.846270    9752 command_runner.go:130] ! I0603 14:27:30.360405       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:51.846270    9752 command_runner.go:130] ! I0603 14:27:30.361780       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 14:51:51.846314    9752 command_runner.go:130] ! I0603 14:27:30.362782       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 14:51:51.846314    9752 command_runner.go:130] ! I0603 14:27:30.362463       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 14:51:51.846363    9752 command_runner.go:130] ! I0603 14:27:30.363332       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.362554       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.363636       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.362564       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.362302       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.362526       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.362586       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.513474       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.513598       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.513645       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.663349       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.663937       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.664013       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.965387       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.965553       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.965614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.965669       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.965730       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! W0603 14:27:30.965760       1 shared_informer.go:597] resyncPeriod 16h47m43.189313611s is smaller than resyncCheckPeriod 20h18m50.945071724s and the informer has already started. Changing it to 20h18m50.945071724s
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.965868       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.966063       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.966153       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.966351       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! W0603 14:27:30.966376       1 shared_informer.go:597] resyncPeriod 20h4m14.719740563s is smaller than resyncCheckPeriod 20h18m50.945071724s and the informer has already started. Changing it to 20h18m50.945071724s
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.966444       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.966547       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.966953       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.967035       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.967206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.967556       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.967765       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.967951       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.968043       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.968127       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.968266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.968373       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.969236       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 14:51:51.847060    9752 command_runner.go:130] ! I0603 14:27:30.969448       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:51.847060    9752 command_runner.go:130] ! I0603 14:27:30.969971       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 14:51:51.847060    9752 command_runner.go:130] ! I0603 14:27:31.113941       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 14:51:51.847101    9752 command_runner.go:130] ! I0603 14:27:31.114128       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 14:51:51.847101    9752 command_runner.go:130] ! I0603 14:27:31.114206       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 14:51:51.847101    9752 command_runner.go:130] ! I0603 14:27:31.263385       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 14:51:51.847173    9752 command_runner.go:130] ! I0603 14:27:31.263850       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 14:51:51.847173    9752 command_runner.go:130] ! I0603 14:27:31.263883       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 14:51:51.847204    9752 command_runner.go:130] ! I0603 14:27:31.412784       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 14:51:51.847230    9752 command_runner.go:130] ! I0603 14:27:31.412929       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 14:51:51.847258    9752 command_runner.go:130] ! I0603 14:27:31.412960       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 14:51:51.847258    9752 command_runner.go:130] ! I0603 14:27:31.563645       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 14:51:51.847287    9752 command_runner.go:130] ! I0603 14:27:31.563784       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 14:51:51.847287    9752 command_runner.go:130] ! I0603 14:27:31.563863       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.716550       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.717040       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.717246       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.727461       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.754004       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500\" does not exist"
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.754224       1 shared_informer.go:320] Caches are synced for GC
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.754460       1 shared_informer.go:320] Caches are synced for HPA
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.760470       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.761503       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.763249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.763617       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.764580       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.765622       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.765811       1 shared_informer.go:320] Caches are synced for TTL
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.765139       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.765067       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.768636       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.770136       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.772665       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.775271       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.782285       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.792874       1 shared_informer.go:320] Caches are synced for service account
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.795205       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.809247       1 shared_informer.go:320] Caches are synced for taint
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.809495       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.810723       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500"
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.812015       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.812917       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.812992       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.815953       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 14:51:51.847862    9752 command_runner.go:130] ! I0603 14:27:31.816065       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 14:51:51.847862    9752 command_runner.go:130] ! I0603 14:27:31.816884       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 14:51:51.847913    9752 command_runner.go:130] ! I0603 14:27:31.817703       1 shared_informer.go:320] Caches are synced for expand
	I0603 14:51:51.847913    9752 command_runner.go:130] ! I0603 14:27:31.817728       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:51:51.847913    9752 command_runner.go:130] ! I0603 14:27:31.819607       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 14:51:51.847913    9752 command_runner.go:130] ! I0603 14:27:31.820072       1 shared_informer.go:320] Caches are synced for node
	I0603 14:51:51.847973    9752 command_runner.go:130] ! I0603 14:27:31.820270       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 14:51:51.847973    9752 command_runner.go:130] ! I0603 14:27:31.820477       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 14:51:51.848016    9752 command_runner.go:130] ! I0603 14:27:31.820555       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 14:51:51.848016    9752 command_runner.go:130] ! I0603 14:27:31.820587       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 14:51:51.848016    9752 command_runner.go:130] ! I0603 14:27:31.820081       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 14:51:51.848016    9752 command_runner.go:130] ! I0603 14:27:31.825727       1 shared_informer.go:320] Caches are synced for namespace
	I0603 14:51:51.848016    9752 command_runner.go:130] ! I0603 14:27:31.832846       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 14:51:51.848071    9752 command_runner.go:130] ! I0603 14:27:31.842133       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:51:51.848071    9752 command_runner.go:130] ! I0603 14:27:31.855357       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500" podCIDRs=["10.244.0.0/24"]
	I0603 14:51:51.848071    9752 command_runner.go:130] ! I0603 14:27:31.878271       1 shared_informer.go:320] Caches are synced for job
	I0603 14:51:51.848144    9752 command_runner.go:130] ! I0603 14:27:31.913558       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:51:51.848144    9752 command_runner.go:130] ! I0603 14:27:31.965153       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:51:51.848144    9752 command_runner.go:130] ! I0603 14:27:32.028352       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:51.848213    9752 command_runner.go:130] ! I0603 14:27:32.061268       1 shared_informer.go:320] Caches are synced for disruption
	I0603 14:51:51.848213    9752 command_runner.go:130] ! I0603 14:27:32.065241       1 shared_informer.go:320] Caches are synced for deployment
	I0603 14:51:51.848266    9752 command_runner.go:130] ! I0603 14:27:32.069863       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:51.848289    9752 command_runner.go:130] ! I0603 14:27:32.469591       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:51.848289    9752 command_runner.go:130] ! I0603 14:27:32.510278       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:51.848316    9752 command_runner.go:130] ! I0603 14:27:32.510533       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:51.848316    9752 command_runner.go:130] ! I0603 14:27:33.110436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="199.281878ms"
	I0603 14:51:51.848387    9752 command_runner.go:130] ! I0603 14:27:33.230475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="119.89616ms"
	I0603 14:51:51.848387    9752 command_runner.go:130] ! I0603 14:27:33.230569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59µs"
	I0603 14:51:51.848428    9752 command_runner.go:130] ! I0603 14:27:34.176449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.004127ms"
	I0603 14:51:51.848428    9752 command_runner.go:130] ! I0603 14:27:34.199426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.643683ms"
	I0603 14:51:51.848428    9752 command_runner.go:130] ! I0603 14:27:34.201037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.6µs"
	I0603 14:51:51.848482    9752 command_runner.go:130] ! I0603 14:27:43.109227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="168.101µs"
	I0603 14:51:51.848522    9752 command_runner.go:130] ! I0603 14:27:43.154756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="203.6µs"
	I0603 14:51:51.848522    9752 command_runner.go:130] ! I0603 14:27:44.622262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.3µs"
	I0603 14:51:51.848576    9752 command_runner.go:130] ! I0603 14:27:45.655101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.946906ms"
	I0603 14:51:51.848576    9752 command_runner.go:130] ! I0603 14:27:45.656447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.098µs"
	I0603 14:51:51.848616    9752 command_runner.go:130] ! I0603 14:27:46.817078       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:51.848616    9752 command_runner.go:130] ! I0603 14:30:30.530460       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:51:51.848701    9752 command_runner.go:130] ! I0603 14:30:30.563054       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m02" podCIDRs=["10.244.1.0/24"]
	I0603 14:51:51.848739    9752 command_runner.go:130] ! I0603 14:30:31.846889       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:51:51.848739    9752 command_runner.go:130] ! I0603 14:30:49.741096       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.848739    9752 command_runner.go:130] ! I0603 14:31:16.611365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.145667ms"
	I0603 14:51:51.848790    9752 command_runner.go:130] ! I0603 14:31:16.634251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.843998ms"
	I0603 14:51:51.848790    9752 command_runner.go:130] ! I0603 14:31:16.634722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="196.103µs"
	I0603 14:51:51.848828    9752 command_runner.go:130] ! I0603 14:31:16.635057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.4µs"
	I0603 14:51:51.848828    9752 command_runner.go:130] ! I0603 14:31:16.670503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.001µs"
	I0603 14:51:51.848879    9752 command_runner.go:130] ! I0603 14:31:19.698737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.129108ms"
	I0603 14:51:51.848918    9752 command_runner.go:130] ! I0603 14:31:19.698833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.8µs"
	I0603 14:51:51.848918    9752 command_runner.go:130] ! I0603 14:31:20.055879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.87041ms"
	I0603 14:51:51.848918    9752 command_runner.go:130] ! I0603 14:31:20.057158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.2µs"
	I0603 14:51:51.848967    9752 command_runner.go:130] ! I0603 14:35:14.351135       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.849006    9752 command_runner.go:130] ! I0603 14:35:14.351827       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:51.849006    9752 command_runner.go:130] ! I0603 14:35:14.376803       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.2.0/24"]
	I0603 14:51:51.849143    9752 command_runner.go:130] ! I0603 14:35:16.927010       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:51:51.849198    9752 command_runner.go:130] ! I0603 14:35:33.157459       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.849198    9752 command_runner.go:130] ! I0603 14:43:17.065455       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.849198    9752 command_runner.go:130] ! I0603 14:45:58.451014       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.849247    9752 command_runner.go:130] ! I0603 14:46:04.988996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.849284    9752 command_runner.go:130] ! I0603 14:46:04.989982       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:51.849284    9752 command_runner.go:130] ! I0603 14:46:05.046032       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.3.0/24"]
	I0603 14:51:51.849333    9752 command_runner.go:130] ! I0603 14:46:11.957254       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.849333    9752 command_runner.go:130] ! I0603 14:47:47.196592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.868896    9752 logs.go:123] Gathering logs for Docker ...
	I0603 14:51:51.868896    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0603 14:51:51.901316    9752 command_runner.go:130] > Jun 03 14:49:05 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:51.901906    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:51.901977    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:51.901977    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:51.901977    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:51.902050    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:51.902115    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:51.902183    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.902183    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0603 14:51:51.902261    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.902261    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:51.902323    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:51.902402    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:51.902402    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:51.902470    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:51.902549    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:51.902549    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:51.902623    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.902623    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0603 14:51:51.902692    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.902692    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:51.902692    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:51.902781    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:51.902844    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:51.902844    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:51.902925    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:51.902983    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:51.902983    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.903047    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0603 14:51:51.903047    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.903122    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0603 14:51:51.903122    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:51.903182    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.903182    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 systemd[1]: Starting Docker Application Container Engine...
	I0603 14:51:51.903243    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.547305957Z" level=info msg="Starting up"
	I0603 14:51:51.903302    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.548486369Z" level=info msg="containerd not running, starting managed containerd"
	I0603 14:51:51.903302    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.550163087Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=663
	I0603 14:51:51.903383    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.588439684Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 14:51:51.903447    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615622567Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 14:51:51.903508    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615812869Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 14:51:51.903561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615892669Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 14:51:51.903624    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615996071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.903709    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.616816479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.903771    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.616941980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.903826    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617127782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.903887    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617266784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.903950    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617291284Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 14:51:51.904010    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617304084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.904065    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617934891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.904065    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.618718299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.904186    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621568528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.904244    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621673229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.904300    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621927432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.904381    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622026433Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 14:51:51.904443    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622569239Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 14:51:51.904503    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622740941Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 14:51:51.904566    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622759241Z" level=info msg="metadata content store policy set" policy=shared
	I0603 14:51:51.904627    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.634889967Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 14:51:51.904719    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.634987368Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 14:51:51.904777    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635019568Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 14:51:51.904829    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635037868Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 14:51:51.904829    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635068969Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 14:51:51.904888    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635139569Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 14:51:51.904948    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635454873Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 14:51:51.905006    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635562874Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 14:51:51.905006    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635584474Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 14:51:51.905059    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635599174Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 14:51:51.905117    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635613674Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905176    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635627574Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905235    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635643175Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905288    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635663175Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905288    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635679475Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905364    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635693275Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905426    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635706375Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905484    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635718075Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905547    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635850277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905606    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635881177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905708    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635899277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905767    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635913377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905819    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635929077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905877    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635942078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905935    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635954478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905991    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635967678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906049    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635981078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906106    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635996378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906164    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636009278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906220    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636021378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906272    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636050579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906330    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636066579Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 14:51:51.906409    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636087279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906468    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636101979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906530    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636113679Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 14:51:51.906590    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636360182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 14:51:51.906669    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636390182Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 14:51:51.906747    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636405182Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 14:51:51.906803    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636417883Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 14:51:51.906882    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636428083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906937    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636445483Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 14:51:51.906998    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636457683Z" level=info msg="NRI interface is disabled by configuration."
	I0603 14:51:51.907054    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636895188Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 14:51:51.907115    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637062689Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 14:51:51.907115    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637110790Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 14:51:51.907195    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637130090Z" level=info msg="containerd successfully booted in 0.051012s"
	I0603 14:51:51.907278    9752 command_runner.go:130] > Jun 03 14:49:58 multinode-720500 dockerd[657]: time="2024-06-03T14:49:58.605269655Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 14:51:51.907278    9752 command_runner.go:130] > Jun 03 14:49:58 multinode-720500 dockerd[657]: time="2024-06-03T14:49:58.830205845Z" level=info msg="Loading containers: start."
	I0603 14:51:51.907331    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.290763156Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 14:51:51.907410    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.371043862Z" level=info msg="Loading containers: done."
	I0603 14:51:51.907465    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.398495238Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 14:51:51.907650    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.399429147Z" level=info msg="Daemon has completed initialization"
	I0603 14:51:51.907650    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.454347399Z" level=info msg="API listen on [::]:2376"
	I0603 14:51:51.907715    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.454526701Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 14:51:51.907769    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 systemd[1]: Started Docker Application Container Engine.
	I0603 14:51:51.907769    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 systemd[1]: Stopping Docker Application Container Engine...
	I0603 14:51:51.907769    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.502444000Z" level=info msg="Processing signal 'terminated'"
	I0603 14:51:51.907769    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.507803805Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0603 14:51:51.907931    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508158405Z" level=info msg="Daemon shutdown complete"
	I0603 14:51:51.907931    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508284905Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0603 14:51:51.908039    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508315705Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0603 14:51:51.908077    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: docker.service: Deactivated successfully.
	I0603 14:51:51.908121    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: Stopped Docker Application Container Engine.
	I0603 14:51:51.908185    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: Starting Docker Application Container Engine...
	I0603 14:51:51.908185    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.581999493Z" level=info msg="Starting up"
	I0603 14:51:51.908261    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.582971494Z" level=info msg="containerd not running, starting managed containerd"
	I0603 14:51:51.908261    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.586955297Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1060
	I0603 14:51:51.908323    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.619972528Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 14:51:51.908402    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.642740749Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 14:51:51.908517    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.642897349Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 14:51:51.908664    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643057949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 14:51:51.908734    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643079049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.908801    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643105249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.908866    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643117549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.908938    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643236149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.908987    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643414849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.909049    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643436249Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 14:51:51.909126    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643446349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.909176    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643469050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.909176    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643579550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.909276    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646283452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.909317    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646409552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.909443    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646539152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646683652Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646720152Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.647911754Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648009354Z" level=info msg="metadata content store policy set" policy=shared
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648261654Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648362554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648383154Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648399754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648413954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648460954Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649437555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649582355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649628755Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649649855Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649667455Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649683955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649698955Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649721455Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649742255Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649758455Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649834555Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649964955Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650022156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650042056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910020    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650059256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910020    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650077256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910020    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650091456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910020    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650109256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910020    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650125756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650143656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650161256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650181156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650384856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650434256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650459456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650483856Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650511256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650529056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650544556Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650596756Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650696356Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650722156Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650741356Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650755156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650769156Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650940656Z" level=info msg="NRI interface is disabled by configuration."
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652184258Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652391658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652570358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652616758Z" level=info msg="containerd successfully booted in 0.035610s"
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.629822557Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 14:51:51.910729    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.661126586Z" level=info msg="Loading containers: start."
	I0603 14:51:51.910729    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.933266636Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 14:51:51.910780    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.024107020Z" level=info msg="Loading containers: done."
	I0603 14:51:51.910780    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.055971749Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 14:51:51.910780    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.056192749Z" level=info msg="Daemon has completed initialization"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.104434794Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.104654694Z" level=info msg="API listen on [::]:2376"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 systemd[1]: Started Docker Application Container Engine.
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Loaded network plugin cni"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Start cri-dockerd grpc backend"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-c9wpc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a\""
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-n2t5d_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0\""
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.786808143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.786968543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.787857244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.788128044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.878884027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882292830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882532331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882658231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.964961706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965059107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911422    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965073207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965170307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0461b752e72814194a3ff0778ad4897f646990c90f8c3fcfb9c28be750bfab15/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.004294343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.006505445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.006802445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.007209145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/29feb700b8ebf36a5e533c2d019afb67137df3c39cd996736aba2eea6197e1b3/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e60bc15f541ebe44a8b2d1cc1a4a878d35fac3b2b8b23ad5b59ae6a7c18fa90/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/192b150e443d2d545d193223f6cdc02bc60fa88f9e646c72e84cad439aec3645/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330597043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330771943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330809243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330940843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.411710918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412168918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412399218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412596918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.543921039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544077939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544114939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544224939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547915343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547962443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547974143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.548055043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596002188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596253788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596401388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596628788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633733423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633807223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633821423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633921623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665408852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665567252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665590052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665814152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ae2b089ecf3ba840b08192449967b2406f6c6d0d8a56a114ddaabc35e3c7ee5/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b4a4ad712a66e8ac5a3ba6d988006318e7c0932c2ad0e4ce9838e7a98695f555/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.147693095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.147891096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.148071396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.148525196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236102677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236209377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236229077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236423777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a3698c141b11639f71ba16cbcb832e7c02097b07aaf307ba72c7cf41a64d9dde/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.541976658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.542524859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.542803559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.545377661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1054]: time="2024-06-03T14:51:11.898791571Z" level=info msg="ignoring event" container=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.899973164Z" level=info msg="shim disconnected" id=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 namespace=moby
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.900143563Z" level=warning msg="cleaning up after shim disconnected" id=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 namespace=moby
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.900158663Z" level=info msg="cleaning up dead shim" namespace=moby
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147466127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147614527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147634527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.148526626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.314851642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.315085942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.315407842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.320950643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354750647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354889547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354906247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.355401447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894225423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894606924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894797424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894956925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.942044061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.942892263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.943014363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.943428065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.914116    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
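The dockerd and cri-dockerd entries above are read back from the node's systemd journal by the test harness. A minimal sketch for pulling the same stream interactively, assuming the profile is named multinode-720500 (as the hostnames in the log suggest) and that the engine and CRI shim run under the usual minikube unit names (docker and cri-docker; adjust if they differ on the node):

	# inspect the Docker engine journal on the minikube node
	minikube ssh -p multinode-720500 -- "sudo journalctl -u docker --no-pager | tail -n 200"
	# cri-dockerd runs as its own systemd unit
	minikube ssh -p multinode-720500 -- "sudo journalctl -u cri-docker --no-pager | tail -n 200"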
	I0603 14:51:51.942671    9752 logs.go:123] Gathering logs for dmesg ...
	I0603 14:51:51.942671    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 14:51:51.966055    9752 command_runner.go:130] > [Jun 3 14:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.128622] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.023991] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.059620] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.020549] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0603 14:51:51.966055    9752 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +5.342920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.685939] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +1.735023] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0603 14:51:51.966055    9752 command_runner.go:130] > [Jun 3 14:49] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0603 14:51:51.966055    9752 command_runner.go:130] > [ +50.878858] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.173829] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	I0603 14:51:51.966055    9752 command_runner.go:130] > [Jun 3 14:50] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.115993] kauditd_printk_skb: 73 callbacks suppressed
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.526092] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	I0603 14:51:51.966647    9752 command_runner.go:130] > [  +0.219569] systemd-fstab-generator[1032]: Ignoring "noauto" option for root device
	I0603 14:51:51.966647    9752 command_runner.go:130] > [  +0.239915] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	I0603 14:51:51.966739    9752 command_runner.go:130] > [  +2.915659] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0603 14:51:51.966739    9752 command_runner.go:130] > [  +0.214861] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0603 14:51:51.966739    9752 command_runner.go:130] > [  +0.207351] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	I0603 14:51:51.966739    9752 command_runner.go:130] > [  +0.266530] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	I0603 14:51:51.966798    9752 command_runner.go:130] > [  +0.876661] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	I0603 14:51:51.966837    9752 command_runner.go:130] > [  +0.110633] kauditd_printk_skb: 205 callbacks suppressed
	I0603 14:51:51.966837    9752 command_runner.go:130] > [  +3.640158] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	I0603 14:51:51.966837    9752 command_runner.go:130] > [  +1.365325] kauditd_printk_skb: 49 callbacks suppressed
	I0603 14:51:51.966837    9752 command_runner.go:130] > [  +5.844179] kauditd_printk_skb: 25 callbacks suppressed
	I0603 14:51:51.966888    9752 command_runner.go:130] > [  +3.106296] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	I0603 14:51:51.966888    9752 command_runner.go:130] > [  +8.568344] kauditd_printk_skb: 70 callbacks suppressed
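The dmesg excerpt above is filtered to warning level and higher. To re-run the same filter by hand on the node, a sketch reusing the exact command the harness runs (the profile name is an assumption taken from the hostnames in this log):

	minikube ssh -p multinode-720500 -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"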
	I0603 14:51:51.968819    9752 logs.go:123] Gathering logs for kube-scheduler [e2d000674d52] ...
	I0603 14:51:51.968864    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2d000674d52"
	I0603 14:51:51.996037    9752 command_runner.go:130] ! I0603 14:50:36.598072       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:51.996037    9752 command_runner.go:130] ! W0603 14:50:39.337367       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 14:51:51.996433    9752 command_runner.go:130] ! W0603 14:50:39.337481       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:51.996433    9752 command_runner.go:130] ! W0603 14:50:39.337517       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 14:51:51.996433    9752 command_runner.go:130] ! W0603 14:50:39.337620       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.434477       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.434769       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.439758       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.442615       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.442644       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.443721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.542876       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
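The scheduler log above was collected with docker logs against container e2d000674d52 inside the node. Two equivalent ways to pull it by hand, as a sketch; the container ID is specific to this run, and the pod name follows the usual static-pod convention kube-scheduler-<nodename>, which is an assumption here:

	# via the container runtime on the node
	minikube ssh -p multinode-720500 -- docker logs --tail 400 e2d000674d52
	# or via the API server, by pod
	kubectl --context multinode-720500 -n kube-system logs kube-scheduler-multinode-720500 --tail 400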
	I0603 14:51:51.999144    9752 logs.go:123] Gathering logs for kube-scheduler [ec3860b2bb3e] ...
	I0603 14:51:51.999207    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3860b2bb3e"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:13.528076       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.031664       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.031870       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.032299       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.032427       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:15.125795       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:15.125934       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:15.129030       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:15.132330       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:15.140068       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:15.132344       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.148563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.026749    9752 command_runner.go:130] ! E0603 14:27:15.150706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.151023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:52.026749    9752 command_runner.go:130] ! E0603 14:27:15.152765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.154981       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! E0603 14:27:15.155066       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.155620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.026749    9752 command_runner.go:130] ! E0603 14:27:15.155698       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.027473    9752 command_runner.go:130] ! W0603 14:27:15.155839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.027473    9752 command_runner.go:130] ! E0603 14:27:15.155928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.027473    9752 command_runner.go:130] ! W0603 14:27:15.151535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:52.027602    9752 command_runner.go:130] ! E0603 14:27:15.156969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:52.027670    9752 command_runner.go:130] ! W0603 14:27:15.156902       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.158297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.151896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.159055       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.152056       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.159892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.152248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.152377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.152535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.152729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.156318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.151779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.160787       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.160968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.161285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.161862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.161874       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.161880       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:16.140920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:16.140979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:52.028285    9752 command_runner.go:130] ! W0603 14:27:16.241899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:52.028285    9752 command_runner.go:130] ! E0603 14:27:16.242196       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:52.028285    9752 command_runner.go:130] ! W0603 14:27:16.262469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028285    9752 command_runner.go:130] ! E0603 14:27:16.263070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028285    9752 command_runner.go:130] ! W0603 14:27:16.294257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028475    9752 command_runner.go:130] ! E0603 14:27:16.294495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028475    9752 command_runner.go:130] ! W0603 14:27:16.364252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:52.028475    9752 command_runner.go:130] ! E0603 14:27:16.364604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:52.028565    9752 command_runner.go:130] ! W0603 14:27:16.422522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:52.028565    9752 command_runner.go:130] ! E0603 14:27:16.422581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.468112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.468324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.510809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.511288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.596260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.596369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.607837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.608073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.665087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.666440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.711247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.711594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.716923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.716968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.731690       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.732816       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:52.029163    9752 command_runner.go:130] ! W0603 14:27:16.743716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:52.029163    9752 command_runner.go:130] ! E0603 14:27:16.743766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:52.029295    9752 command_runner.go:130] ! I0603 14:27:18.441261       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:52.029295    9752 command_runner.go:130] ! E0603 14:48:07.717597       1 run.go:74] "command failed" err="finished without leader elect"
	I0603 14:51:52.039727    9752 logs.go:123] Gathering logs for coredns [f9b260d61dfb] ...
	I0603 14:51:52.039727    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b260d61dfb"
	I0603 14:51:52.069011    9752 command_runner.go:130] > .:53
	I0603 14:51:52.069121    9752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	I0603 14:51:52.069121    9752 command_runner.go:130] > CoreDNS-1.11.1
	I0603 14:51:52.069121    9752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 14:51:52.069121    9752 command_runner.go:130] > [INFO] 127.0.0.1:44244 - 27530 "HINFO IN 6157212600695805867.8146164028617998750. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029059168s
	I0603 14:51:52.069401    9752 logs.go:123] Gathering logs for kube-controller-manager [f14b3b67d8f2] ...
	I0603 14:51:52.069401    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14b3b67d8f2"
	I0603 14:51:52.097576    9752 command_runner.go:130] ! I0603 14:50:37.132219       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:52.097576    9752 command_runner.go:130] ! I0603 14:50:37.965887       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 14:51:52.098038    9752 command_runner.go:130] ! I0603 14:50:37.966244       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:52.098038    9752 command_runner.go:130] ! I0603 14:50:37.969206       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:52.098106    9752 command_runner.go:130] ! I0603 14:50:37.969593       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:52.098106    9752 command_runner.go:130] ! I0603 14:50:37.970401       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 14:51:52.098145    9752 command_runner.go:130] ! I0603 14:50:37.970711       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:52.098259    9752 command_runner.go:130] ! I0603 14:50:41.339512       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 14:51:52.098333    9752 command_runner.go:130] ! I0603 14:50:41.341523       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 14:51:52.098333    9752 command_runner.go:130] ! E0603 14:50:41.352670       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 14:51:52.099035    9752 command_runner.go:130] ! I0603 14:50:41.352747       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 14:51:52.099035    9752 command_runner.go:130] ! I0603 14:50:41.352812       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 14:51:52.099035    9752 command_runner.go:130] ! I0603 14:50:41.408502       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 14:51:52.099565    9752 command_runner.go:130] ! I0603 14:50:41.409411       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 14:51:52.099565    9752 command_runner.go:130] ! I0603 14:50:41.409645       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 14:51:52.099865    9752 command_runner.go:130] ! I0603 14:50:41.419223       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 14:51:52.100181    9752 command_runner.go:130] ! I0603 14:50:41.421972       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 14:51:52.100376    9752 command_runner.go:130] ! I0603 14:50:41.422044       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 14:51:52.100376    9752 command_runner.go:130] ! I0603 14:50:41.427251       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 14:51:52.100376    9752 command_runner.go:130] ! I0603 14:50:41.427473       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 14:51:52.100376    9752 command_runner.go:130] ! I0603 14:50:41.427485       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 14:51:52.100376    9752 command_runner.go:130] ! I0603 14:50:41.433520       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 14:51:52.101076    9752 command_runner.go:130] ! I0603 14:50:41.433884       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.442828       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.442944       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.443317       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.443408       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.443456       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.444287       1 shared_informer.go:320] Caches are synced for tokens
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.448688       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.448996       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.449010       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.471390       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.478411       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 14:51:52.101765    9752 command_runner.go:130] ! I0603 14:50:41.478486       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 14:51:52.101765    9752 command_runner.go:130] ! I0603 14:50:41.496707       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:52.101765    9752 command_runner.go:130] ! I0603 14:50:41.496851       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:52.101864    9752 command_runner.go:130] ! I0603 14:50:41.496864       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 14:51:52.101864    9752 command_runner.go:130] ! I0603 14:50:41.512398       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 14:51:52.101910    9752 command_runner.go:130] ! I0603 14:50:41.512785       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 14:51:52.101910    9752 command_runner.go:130] ! I0603 14:50:41.514642       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 14:51:52.101910    9752 command_runner.go:130] ! I0603 14:50:41.526995       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 14:51:52.101910    9752 command_runner.go:130] ! I0603 14:50:41.528483       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 14:51:52.101910    9752 command_runner.go:130] ! I0603 14:50:41.528503       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 14:51:52.102001    9752 command_runner.go:130] ! I0603 14:50:41.560312       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 14:51:52.102001    9752 command_runner.go:130] ! I0603 14:50:41.560410       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 14:51:52.102056    9752 command_runner.go:130] ! I0603 14:50:41.560606       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 14:51:52.102056    9752 command_runner.go:130] ! W0603 14:50:41.560637       1 shared_informer.go:597] resyncPeriod 13h36m9.576172414s is smaller than resyncCheckPeriod 18h19m8.512720564s and the informer has already started. Changing it to 18h19m8.512720564s
	I0603 14:51:52.102105    9752 command_runner.go:130] ! I0603 14:50:41.560790       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 14:51:52.102105    9752 command_runner.go:130] ! I0603 14:50:41.560834       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 14:51:52.102156    9752 command_runner.go:130] ! I0603 14:50:41.561009       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.562817       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.562891       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.562939       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.562993       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.563015       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.563032       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.563098       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.564183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.564221       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 14:51:52.102426    9752 command_runner.go:130] ! I0603 14:50:41.564392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 14:51:52.102426    9752 command_runner.go:130] ! I0603 14:50:41.564485       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.564524       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.564636       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.564663       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.564687       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.565005       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.565020       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.565041       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.581314       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.587130       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.587228       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.587968       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.594087       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.594455       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.594469       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.597147       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.597498       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.597530       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.607190       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.607598       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.607632       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.610674       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.610909       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.611242       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.614142       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.614447       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.614483       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.635724       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.635913       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.635952       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.636091       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.640219       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.640668       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.640872       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.653671       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 14:51:52.103142    9752 command_runner.go:130] ! I0603 14:50:41.654023       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 14:51:52.103142    9752 command_runner.go:130] ! I0603 14:50:41.654058       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 14:51:52.103142    9752 command_runner.go:130] ! I0603 14:50:41.667205       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 14:51:52.103142    9752 command_runner.go:130] ! I0603 14:50:41.667229       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 14:51:52.103142    9752 command_runner.go:130] ! I0603 14:50:41.667236       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 14:51:52.103248    9752 command_runner.go:130] ! I0603 14:50:41.669727       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 14:51:52.103248    9752 command_runner.go:130] ! I0603 14:50:41.669883       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 14:51:52.103248    9752 command_runner.go:130] ! I0603 14:50:41.726233       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 14:51:52.103290    9752 command_runner.go:130] ! I0603 14:50:41.726660       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 14:51:52.103290    9752 command_runner.go:130] ! I0603 14:50:41.729282       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 14:51:52.103290    9752 command_runner.go:130] ! I0603 14:50:41.729661       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 14:51:52.103364    9752 command_runner.go:130] ! I0603 14:50:41.729876       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 14:51:52.103364    9752 command_runner.go:130] ! I0603 14:50:41.736485       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 14:51:52.103423    9752 command_runner.go:130] ! I0603 14:50:41.737260       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 14:51:52.103423    9752 command_runner.go:130] ! E0603 14:50:41.740502       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 14:51:52.103476    9752 command_runner.go:130] ! I0603 14:50:41.740814       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 14:51:52.103476    9752 command_runner.go:130] ! I0603 14:50:41.740933       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 14:51:52.103516    9752 command_runner.go:130] ! I0603 14:50:41.741056       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 14:51:52.103516    9752 command_runner.go:130] ! I0603 14:50:41.750961       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 14:51:52.103516    9752 command_runner.go:130] ! I0603 14:50:41.751223       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 14:51:52.103569    9752 command_runner.go:130] ! I0603 14:50:41.751477       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 14:51:52.103569    9752 command_runner.go:130] ! I0603 14:50:41.792608       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 14:51:52.103609    9752 command_runner.go:130] ! I0603 14:50:41.792759       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 14:51:52.103656    9752 command_runner.go:130] ! I0603 14:50:41.792773       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 14:51:52.103656    9752 command_runner.go:130] ! I0603 14:50:41.844612       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 14:51:52.103695    9752 command_runner.go:130] ! I0603 14:50:41.844676       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 14:51:52.103695    9752 command_runner.go:130] ! I0603 14:50:41.844688       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 14:51:52.103748    9752 command_runner.go:130] ! I0603 14:50:41.896427       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 14:51:52.103748    9752 command_runner.go:130] ! I0603 14:50:41.896537       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 14:51:52.103793    9752 command_runner.go:130] ! I0603 14:50:41.896561       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 14:51:52.103793    9752 command_runner.go:130] ! I0603 14:50:41.896589       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 14:51:52.103846    9752 command_runner.go:130] ! I0603 14:50:41.942852       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 14:51:52.103846    9752 command_runner.go:130] ! I0603 14:50:41.943245       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 14:51:52.103887    9752 command_runner.go:130] ! I0603 14:50:41.943758       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 14:51:52.103887    9752 command_runner.go:130] ! I0603 14:50:41.993465       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 14:51:52.103887    9752 command_runner.go:130] ! I0603 14:50:41.993559       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 14:51:52.103941    9752 command_runner.go:130] ! I0603 14:50:41.993571       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 14:51:52.103941    9752 command_runner.go:130] ! I0603 14:50:42.042940       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:42.043287       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:42.043532       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:42.043637       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.110253       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.110544       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.110823       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.111251       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.114516       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.114754       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.114859       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.115420       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.120172       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.120726       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.120900       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.130702       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.132004       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.132310       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.135969       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.136243       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.136643       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.137507       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.137603       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.137643       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.137983       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.138267       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.138302       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.138609       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.138713       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.138746       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.138986       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.143612       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.143872       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.143971       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.153209       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.172692       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.193739       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.202204       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500\" does not exist"
	I0603 14:51:52.104567    9752 command_runner.go:130] ! I0603 14:50:52.202247       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:51:52.104567    9752 command_runner.go:130] ! I0603 14:50:52.202568       1 shared_informer.go:320] Caches are synced for TTL
	I0603 14:51:52.104567    9752 command_runner.go:130] ! I0603 14:50:52.202880       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:52.104567    9752 command_runner.go:130] ! I0603 14:50:52.206448       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:52.104711    9752 command_runner.go:130] ! I0603 14:50:52.209857       1 shared_informer.go:320] Caches are synced for expand
	I0603 14:51:52.104711    9752 command_runner.go:130] ! I0603 14:50:52.210173       1 shared_informer.go:320] Caches are synced for namespace
	I0603 14:51:52.104733    9752 command_runner.go:130] ! I0603 14:50:52.211842       1 shared_informer.go:320] Caches are synced for node
	I0603 14:51:52.104733    9752 command_runner.go:130] ! I0603 14:50:52.213573       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 14:51:52.104733    9752 command_runner.go:130] ! I0603 14:50:52.213786       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 14:51:52.104733    9752 command_runner.go:130] ! I0603 14:50:52.213951       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 14:51:52.104733    9752 command_runner.go:130] ! I0603 14:50:52.214197       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 14:51:52.104838    9752 command_runner.go:130] ! I0603 14:50:52.227537       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 14:51:52.104883    9752 command_runner.go:130] ! I0603 14:50:52.228829       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 14:51:52.104883    9752 command_runner.go:130] ! I0603 14:50:52.230275       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.233623       1 shared_informer.go:320] Caches are synced for HPA
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.237260       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.238266       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.238408       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.238593       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.239064       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.242643       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 14:51:52.105035    9752 command_runner.go:130] ! I0603 14:50:52.243734       1 shared_informer.go:320] Caches are synced for taint
	I0603 14:51:52.105035    9752 command_runner.go:130] ! I0603 14:50:52.243982       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 14:51:52.105035    9752 command_runner.go:130] ! I0603 14:50:52.246907       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 14:51:52.105035    9752 command_runner.go:130] ! I0603 14:50:52.248798       1 shared_informer.go:320] Caches are synced for GC
	I0603 14:51:52.105035    9752 command_runner.go:130] ! I0603 14:50:52.249570       1 shared_informer.go:320] Caches are synced for service account
	I0603 14:51:52.105035    9752 command_runner.go:130] ! I0603 14:50:52.252842       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 14:51:52.105124    9752 command_runner.go:130] ! I0603 14:50:52.254214       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 14:51:52.105124    9752 command_runner.go:130] ! I0603 14:50:52.278584       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 14:51:52.105124    9752 command_runner.go:130] ! I0603 14:50:52.278573       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500"
	I0603 14:51:52.105124    9752 command_runner.go:130] ! I0603 14:50:52.278738       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:51:52.105124    9752 command_runner.go:130] ! I0603 14:50:52.278760       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:51:52.105216    9752 command_runner.go:130] ! I0603 14:50:52.279382       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:52.105216    9752 command_runner.go:130] ! I0603 14:50:52.288184       1 shared_informer.go:320] Caches are synced for disruption
	I0603 14:51:52.105216    9752 command_runner.go:130] ! I0603 14:50:52.293854       1 shared_informer.go:320] Caches are synced for deployment
	I0603 14:51:52.105216    9752 command_runner.go:130] ! I0603 14:50:52.294911       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 14:51:52.105216    9752 command_runner.go:130] ! I0603 14:50:52.297844       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 14:51:52.105299    9752 command_runner.go:130] ! I0603 14:50:52.297906       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 14:51:52.105299    9752 command_runner.go:130] ! I0603 14:50:52.303945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.988424ms"
	I0603 14:51:52.105299    9752 command_runner.go:130] ! I0603 14:50:52.304988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.899µs"
	I0603 14:51:52.105299    9752 command_runner.go:130] ! I0603 14:50:52.309899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.433483ms"
	I0603 14:51:52.105398    9752 command_runner.go:130] ! I0603 14:50:52.310618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0603 14:51:52.105398    9752 command_runner.go:130] ! I0603 14:50:52.311874       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:51:52.105442    9752 command_runner.go:130] ! I0603 14:50:52.315773       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:51:52.105442    9752 command_runner.go:130] ! I0603 14:50:52.322625       1 shared_informer.go:320] Caches are synced for job
	I0603 14:51:52.105482    9752 command_runner.go:130] ! I0603 14:50:52.328121       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:51:52.105482    9752 command_runner.go:130] ! I0603 14:50:52.345391       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:51:52.105482    9752 command_runner.go:130] ! I0603 14:50:52.415295       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:51:52.105482    9752 command_runner.go:130] ! I0603 14:50:52.416018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:51:52.105545    9752 command_runner.go:130] ! I0603 14:50:52.421610       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:51:52.105575    9752 command_runner.go:130] ! I0603 14:50:52.453966       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:50:52.465679       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:50:52.907461       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:50:52.937479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:50:52.937578       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:51:22.286800       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:51:45.740640       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.050345ms"
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:51:45.740735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.201µs"
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:51:45.758728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.201µs"
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:51:45.833756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.845189ms"
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:51:45.833914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.301µs"
	I0603 14:51:52.121042    9752 logs.go:123] Gathering logs for kindnet [ab840a6a9856] ...
	I0603 14:51:52.121042    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab840a6a9856"
	I0603 14:51:52.148865    9752 command_runner.go:130] ! I0603 14:37:02.418496       1 main.go:227] handling current node
	I0603 14:51:52.148865    9752 command_runner.go:130] ! I0603 14:37:02.418509       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.149272    9752 command_runner.go:130] ! I0603 14:37:02.418514       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.149272    9752 command_runner.go:130] ! I0603 14:37:02.419057       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.149272    9752 command_runner.go:130] ! I0603 14:37:02.419146       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.149272    9752 command_runner.go:130] ! I0603 14:37:12.433874       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.149272    9752 command_runner.go:130] ! I0603 14:37:12.433964       1 main.go:227] handling current node
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:12.433979       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:12.433987       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:12.434708       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:12.434812       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:22.441734       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:22.443317       1 main.go:227] handling current node
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:22.443366       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:22.443394       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:22.443536       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:22.443544       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:32.458669       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:32.458715       1 main.go:227] handling current node
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:32.458746       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:32.458759       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:32.459272       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:32.459313       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.149662    9752 command_runner.go:130] ! I0603 14:37:42.465893       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.149662    9752 command_runner.go:130] ! I0603 14:37:42.466039       1 main.go:227] handling current node
	I0603 14:51:52.149707    9752 command_runner.go:130] ! I0603 14:37:42.466054       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.149707    9752 command_runner.go:130] ! I0603 14:37:42.466062       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.149707    9752 command_runner.go:130] ! I0603 14:37:42.466530       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.149707    9752 command_runner.go:130] ! I0603 14:37:42.466713       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.149707    9752 command_runner.go:130] ! I0603 14:37:52.484160       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.149707    9752 command_runner.go:130] ! I0603 14:37:52.484343       1 main.go:227] handling current node
	I0603 14:51:52.149799    9752 command_runner.go:130] ! I0603 14:37:52.484358       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.149799    9752 command_runner.go:130] ! I0603 14:37:52.484366       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.149967    9752 command_runner.go:130] ! I0603 14:37:52.484918       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.149967    9752 command_runner.go:130] ! I0603 14:37:52.485003       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.149967    9752 command_runner.go:130] ! I0603 14:38:02.499379       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.149967    9752 command_runner.go:130] ! I0603 14:38:02.500157       1 main.go:227] handling current node
	I0603 14:51:52.149967    9752 command_runner.go:130] ! I0603 14:38:02.500459       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.168486    9752 command_runner.go:130] ! I0603 14:38:02.500600       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.168486    9752 command_runner.go:130] ! I0603 14:38:02.500943       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.168874    9752 command_runner.go:130] ! I0603 14:38:02.501037       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.168874    9752 command_runner.go:130] ! I0603 14:38:12.510568       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.168975    9752 command_runner.go:130] ! I0603 14:38:12.510676       1 main.go:227] handling current node
	I0603 14:51:52.168975    9752 command_runner.go:130] ! I0603 14:38:12.510691       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.169089    9752 command_runner.go:130] ! I0603 14:38:12.510699       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.171864    9752 command_runner.go:130] ! I0603 14:38:12.511065       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172262    9752 command_runner.go:130] ! I0603 14:38:12.511143       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172325    9752 command_runner.go:130] ! I0603 14:38:22.523564       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172325    9752 command_runner.go:130] ! I0603 14:38:22.523667       1 main.go:227] handling current node
	I0603 14:51:52.172325    9752 command_runner.go:130] ! I0603 14:38:22.523681       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:22.523690       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:22.524005       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:22.524127       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:32.531830       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:32.532127       1 main.go:227] handling current node
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:32.532312       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:32.532328       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:32.532640       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:32.532677       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:42.545963       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:42.546065       1 main.go:227] handling current node
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:42.546080       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:42.546088       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:42.546348       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:42.546488       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:52.559438       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:52.559480       1 main.go:227] handling current node
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:52.559491       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:52.559497       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:52.559891       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:52.560039       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172706    9752 command_runner.go:130] ! I0603 14:39:02.565901       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172706    9752 command_runner.go:130] ! I0603 14:39:02.566044       1 main.go:227] handling current node
	I0603 14:51:52.172706    9752 command_runner.go:130] ! I0603 14:39:02.566059       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172772    9752 command_runner.go:130] ! I0603 14:39:02.566066       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172772    9752 command_runner.go:130] ! I0603 14:39:02.566452       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172772    9752 command_runner.go:130] ! I0603 14:39:02.566542       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172825    9752 command_runner.go:130] ! I0603 14:39:12.580562       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172825    9752 command_runner.go:130] ! I0603 14:39:12.580900       1 main.go:227] handling current node
	I0603 14:51:52.172863    9752 command_runner.go:130] ! I0603 14:39:12.581000       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172898    9752 command_runner.go:130] ! I0603 14:39:12.581036       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172898    9752 command_runner.go:130] ! I0603 14:39:12.581299       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172920    9752 command_runner.go:130] ! I0603 14:39:12.581368       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:22.589560       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:22.589667       1 main.go:227] handling current node
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:22.589684       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:22.589692       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:22.590588       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:22.590765       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:32.597414       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:32.597518       1 main.go:227] handling current node
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:32.597534       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:32.597541       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:32.597952       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:32.598225       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:42.608987       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:42.609016       1 main.go:227] handling current node
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:42.609075       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:42.609129       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:42.609601       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:42.609617       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:52.622153       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:52.622304       1 main.go:227] handling current node
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:52.622322       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:52.622329       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:52.622994       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:52.623087       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:02.643681       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:02.643725       1 main.go:227] handling current node
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:02.643738       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:02.643744       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:02.644288       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:02.644378       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:12.652030       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:12.652123       1 main.go:227] handling current node
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:12.652138       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:12.652145       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.173477    9752 command_runner.go:130] ! I0603 14:40:12.652402       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.173477    9752 command_runner.go:130] ! I0603 14:40:12.652480       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.173538    9752 command_runner.go:130] ! I0603 14:40:22.661893       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.173538    9752 command_runner.go:130] ! I0603 14:40:22.661999       1 main.go:227] handling current node
	I0603 14:51:52.173538    9752 command_runner.go:130] ! I0603 14:40:22.662015       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.173538    9752 command_runner.go:130] ! I0603 14:40:22.662023       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.173538    9752 command_runner.go:130] ! I0603 14:40:22.662623       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.173538    9752 command_runner.go:130] ! I0603 14:40:22.662711       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.173668    9752 command_runner.go:130] ! I0603 14:40:32.676552       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.173668    9752 command_runner.go:130] ! I0603 14:40:32.676654       1 main.go:227] handling current node
	I0603 14:51:52.173668    9752 command_runner.go:130] ! I0603 14:40:32.676669       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.173707    9752 command_runner.go:130] ! I0603 14:40:32.676677       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.173707    9752 command_runner.go:130] ! I0603 14:40:32.676798       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.173707    9752 command_runner.go:130] ! I0603 14:40:32.676829       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.173775    9752 command_runner.go:130] ! I0603 14:40:42.690358       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.173775    9752 command_runner.go:130] ! I0603 14:40:42.690463       1 main.go:227] handling current node
	I0603 14:51:52.173813    9752 command_runner.go:130] ! I0603 14:40:42.690478       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.173813    9752 command_runner.go:130] ! I0603 14:40:42.690485       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.173813    9752 command_runner.go:130] ! I0603 14:40:42.691131       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.173863    9752 command_runner.go:130] ! I0603 14:40:42.691265       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.173863    9752 command_runner.go:130] ! I0603 14:40:52.704086       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.173900    9752 command_runner.go:130] ! I0603 14:40:52.704406       1 main.go:227] handling current node
	I0603 14:51:52.173900    9752 command_runner.go:130] ! I0603 14:40:52.704615       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.173900    9752 command_runner.go:130] ! I0603 14:40:52.704801       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.173951    9752 command_runner.go:130] ! I0603 14:40:52.705555       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.173951    9752 command_runner.go:130] ! I0603 14:40:52.705594       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.173951    9752 command_runner.go:130] ! I0603 14:41:02.714922       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.173988    9752 command_runner.go:130] ! I0603 14:41:02.715404       1 main.go:227] handling current node
	I0603 14:51:52.173988    9752 command_runner.go:130] ! I0603 14:41:02.715629       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174032    9752 command_runner.go:130] ! I0603 14:41:02.715697       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174032    9752 command_runner.go:130] ! I0603 14:41:02.715836       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174070    9752 command_runner.go:130] ! I0603 14:41:02.717286       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174070    9752 command_runner.go:130] ! I0603 14:41:12.733829       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174070    9752 command_runner.go:130] ! I0603 14:41:12.733940       1 main.go:227] handling current node
	I0603 14:51:52.174121    9752 command_runner.go:130] ! I0603 14:41:12.733954       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174121    9752 command_runner.go:130] ! I0603 14:41:12.733962       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174121    9752 command_runner.go:130] ! I0603 14:41:12.734767       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174159    9752 command_runner.go:130] ! I0603 14:41:12.734861       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174159    9752 command_runner.go:130] ! I0603 14:41:22.747461       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174159    9752 command_runner.go:130] ! I0603 14:41:22.747575       1 main.go:227] handling current node
	I0603 14:51:52.174159    9752 command_runner.go:130] ! I0603 14:41:22.747589       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174208    9752 command_runner.go:130] ! I0603 14:41:22.747596       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174208    9752 command_runner.go:130] ! I0603 14:41:22.748388       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174245    9752 command_runner.go:130] ! I0603 14:41:22.748478       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174245    9752 command_runner.go:130] ! I0603 14:41:32.755048       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174245    9752 command_runner.go:130] ! I0603 14:41:32.755098       1 main.go:227] handling current node
	I0603 14:51:52.174245    9752 command_runner.go:130] ! I0603 14:41:32.755111       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174296    9752 command_runner.go:130] ! I0603 14:41:32.755118       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174296    9752 command_runner.go:130] ! I0603 14:41:32.755281       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174296    9752 command_runner.go:130] ! I0603 14:41:32.755297       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174296    9752 command_runner.go:130] ! I0603 14:41:42.769640       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:42.769732       1 main.go:227] handling current node
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:42.769748       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:42.769756       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:42.769900       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:42.769930       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:52.777787       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:52.777885       1 main.go:227] handling current node
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:52.777901       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:52.777909       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:52.778034       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:52.778047       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:02.796158       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:02.796336       1 main.go:227] handling current node
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:02.796352       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:02.796361       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:02.796675       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:02.796693       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:12.804901       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:12.805658       1 main.go:227] handling current node
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:12.805981       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:12.806077       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:12.808338       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:12.808446       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:22.822735       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:22.822779       1 main.go:227] handling current node
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:22.822792       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:22.822798       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:22.823041       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:22.823056       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:32.829730       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:32.829780       1 main.go:227] handling current node
	I0603 14:51:52.174905    9752 command_runner.go:130] ! I0603 14:42:32.829793       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174905    9752 command_runner.go:130] ! I0603 14:42:32.829798       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174964    9752 command_runner.go:130] ! I0603 14:42:32.830081       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174964    9752 command_runner.go:130] ! I0603 14:42:32.830157       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174964    9752 command_runner.go:130] ! I0603 14:42:42.843959       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174964    9752 command_runner.go:130] ! I0603 14:42:42.844251       1 main.go:227] handling current node
	I0603 14:51:52.174964    9752 command_runner.go:130] ! I0603 14:42:42.844269       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175073    9752 command_runner.go:130] ! I0603 14:42:42.844278       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175128    9752 command_runner.go:130] ! I0603 14:42:42.844481       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175128    9752 command_runner.go:130] ! I0603 14:42:42.844489       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175128    9752 command_runner.go:130] ! I0603 14:42:52.970825       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175179    9752 command_runner.go:130] ! I0603 14:42:52.970941       1 main.go:227] handling current node
	I0603 14:51:52.175179    9752 command_runner.go:130] ! I0603 14:42:52.970957       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175179    9752 command_runner.go:130] ! I0603 14:42:52.970965       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175217    9752 command_runner.go:130] ! I0603 14:42:52.971359       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175217    9752 command_runner.go:130] ! I0603 14:42:52.971390       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175217    9752 command_runner.go:130] ! I0603 14:43:02.985233       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175217    9752 command_runner.go:130] ! I0603 14:43:02.985707       1 main.go:227] handling current node
	I0603 14:51:52.175267    9752 command_runner.go:130] ! I0603 14:43:02.985801       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175267    9752 command_runner.go:130] ! I0603 14:43:02.985813       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175305    9752 command_runner.go:130] ! I0603 14:43:02.986087       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175305    9752 command_runner.go:130] ! I0603 14:43:02.986213       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175305    9752 command_runner.go:130] ! I0603 14:43:13.001792       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175381    9752 command_runner.go:130] ! I0603 14:43:13.001903       1 main.go:227] handling current node
	I0603 14:51:52.175381    9752 command_runner.go:130] ! I0603 14:43:13.001919       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175381    9752 command_runner.go:130] ! I0603 14:43:13.001926       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175421    9752 command_runner.go:130] ! I0603 14:43:13.002409       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175421    9752 command_runner.go:130] ! I0603 14:43:13.002546       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175465    9752 command_runner.go:130] ! I0603 14:43:23.014350       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175465    9752 command_runner.go:130] ! I0603 14:43:23.014430       1 main.go:227] handling current node
	I0603 14:51:52.175507    9752 command_runner.go:130] ! I0603 14:43:23.014443       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175507    9752 command_runner.go:130] ! I0603 14:43:23.014466       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175543    9752 command_runner.go:130] ! I0603 14:43:23.014973       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175543    9752 command_runner.go:130] ! I0603 14:43:23.015050       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175575    9752 command_runner.go:130] ! I0603 14:43:33.028486       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:33.028618       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:33.028632       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:33.028639       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:33.028797       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:33.029137       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:43.042807       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:43.042971       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:43.043055       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:43.043063       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:43.043998       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:43.044018       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:53.060985       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:53.061106       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:53.061142       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:53.061153       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:53.061441       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:53.061530       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:03.074882       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:03.075006       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:03.075023       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:03.075031       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:03.075251       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:03.075287       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:13.082515       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:13.082634       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:13.082649       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:13.082657       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:13.083854       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:13.084020       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:23.096516       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:23.096561       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:23.096574       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:23.096585       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:23.098310       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:23.098383       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:33.105034       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:33.105146       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:33.105199       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176133    9752 command_runner.go:130] ! I0603 14:44:33.105211       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176133    9752 command_runner.go:130] ! I0603 14:44:33.105354       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:33.105362       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:43.115437       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:43.115557       1 main.go:227] handling current node
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:43.115572       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:43.115580       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:43.116248       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:43.116325       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176307    9752 command_runner.go:130] ! I0603 14:44:53.129841       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176307    9752 command_runner.go:130] ! I0603 14:44:53.129952       1 main.go:227] handling current node
	I0603 14:51:52.176363    9752 command_runner.go:130] ! I0603 14:44:53.129967       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176363    9752 command_runner.go:130] ! I0603 14:44:53.129992       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176363    9752 command_runner.go:130] ! I0603 14:44:53.130474       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176414    9752 command_runner.go:130] ! I0603 14:44:53.130513       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176414    9752 command_runner.go:130] ! I0603 14:45:03.145387       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176414    9752 command_runner.go:130] ! I0603 14:45:03.145506       1 main.go:227] handling current node
	I0603 14:51:52.176454    9752 command_runner.go:130] ! I0603 14:45:03.145522       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176454    9752 command_runner.go:130] ! I0603 14:45:03.145529       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176454    9752 command_runner.go:130] ! I0603 14:45:03.145991       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176506    9752 command_runner.go:130] ! I0603 14:45:03.146104       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176506    9752 command_runner.go:130] ! I0603 14:45:13.154208       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176506    9752 command_runner.go:130] ! I0603 14:45:13.154303       1 main.go:227] handling current node
	I0603 14:51:52.176546    9752 command_runner.go:130] ! I0603 14:45:13.154318       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176546    9752 command_runner.go:130] ! I0603 14:45:13.154325       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176546    9752 command_runner.go:130] ! I0603 14:45:13.154444       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176599    9752 command_runner.go:130] ! I0603 14:45:13.154751       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176599    9752 command_runner.go:130] ! I0603 14:45:23.167023       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176639    9752 command_runner.go:130] ! I0603 14:45:23.167139       1 main.go:227] handling current node
	I0603 14:51:52.176639    9752 command_runner.go:130] ! I0603 14:45:23.167156       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176639    9752 command_runner.go:130] ! I0603 14:45:23.167204       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176705    9752 command_runner.go:130] ! I0603 14:45:23.167490       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176705    9752 command_runner.go:130] ! I0603 14:45:23.167675       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176743    9752 command_runner.go:130] ! I0603 14:45:33.182518       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176743    9752 command_runner.go:130] ! I0603 14:45:33.182565       1 main.go:227] handling current node
	I0603 14:51:52.176743    9752 command_runner.go:130] ! I0603 14:45:33.182579       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176794    9752 command_runner.go:130] ! I0603 14:45:33.182586       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176794    9752 command_runner.go:130] ! I0603 14:45:33.183095       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176832    9752 command_runner.go:130] ! I0603 14:45:33.183227       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176832    9752 command_runner.go:130] ! I0603 14:45:43.191204       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176832    9752 command_runner.go:130] ! I0603 14:45:43.191291       1 main.go:227] handling current node
	I0603 14:51:52.176882    9752 command_runner.go:130] ! I0603 14:45:43.191307       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176882    9752 command_runner.go:130] ! I0603 14:45:43.191316       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176882    9752 command_runner.go:130] ! I0603 14:45:43.191713       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176920    9752 command_runner.go:130] ! I0603 14:45:43.191805       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176920    9752 command_runner.go:130] ! I0603 14:45:53.200715       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176920    9752 command_runner.go:130] ! I0603 14:45:53.200890       1 main.go:227] handling current node
	I0603 14:51:52.176969    9752 command_runner.go:130] ! I0603 14:45:53.200927       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176969    9752 command_runner.go:130] ! I0603 14:45:53.200936       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177007    9752 command_runner.go:130] ! I0603 14:45:53.201688       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.177007    9752 command_runner.go:130] ! I0603 14:45:53.201766       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.177007    9752 command_runner.go:130] ! I0603 14:46:03.207719       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177057    9752 command_runner.go:130] ! I0603 14:46:03.207807       1 main.go:227] handling current node
	I0603 14:51:52.177057    9752 command_runner.go:130] ! I0603 14:46:03.207821       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177057    9752 command_runner.go:130] ! I0603 14:46:03.207828       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177094    9752 command_runner.go:130] ! I0603 14:46:13.222386       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177094    9752 command_runner.go:130] ! I0603 14:46:13.222505       1 main.go:227] handling current node
	I0603 14:51:52.177094    9752 command_runner.go:130] ! I0603 14:46:13.222522       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177144    9752 command_runner.go:130] ! I0603 14:46:13.222530       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177144    9752 command_runner.go:130] ! I0603 14:46:13.223020       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177182    9752 command_runner.go:130] ! I0603 14:46:13.223269       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177182    9752 command_runner.go:130] ! I0603 14:46:13.223648       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.22.151.134 Flags: [] Table: 0} 
	I0603 14:51:52.177233    9752 command_runner.go:130] ! I0603 14:46:23.237715       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177233    9752 command_runner.go:130] ! I0603 14:46:23.237767       1 main.go:227] handling current node
	I0603 14:51:52.177233    9752 command_runner.go:130] ! I0603 14:46:23.237797       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177271    9752 command_runner.go:130] ! I0603 14:46:23.237803       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177271    9752 command_runner.go:130] ! I0603 14:46:23.237989       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177430    9752 command_runner.go:130] ! I0603 14:46:23.238008       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177430    9752 command_runner.go:130] ! I0603 14:46:33.244795       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177430    9752 command_runner.go:130] ! I0603 14:46:33.244940       1 main.go:227] handling current node
	I0603 14:51:52.177430    9752 command_runner.go:130] ! I0603 14:46:33.244960       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177497    9752 command_runner.go:130] ! I0603 14:46:33.244971       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177497    9752 command_runner.go:130] ! I0603 14:46:33.245647       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177540    9752 command_runner.go:130] ! I0603 14:46:33.245764       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177540    9752 command_runner.go:130] ! I0603 14:46:43.261658       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177540    9752 command_runner.go:130] ! I0603 14:46:43.262286       1 main.go:227] handling current node
	I0603 14:51:52.177591    9752 command_runner.go:130] ! I0603 14:46:43.262368       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177591    9752 command_runner.go:130] ! I0603 14:46:43.262496       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177591    9752 command_runner.go:130] ! I0603 14:46:43.262847       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177631    9752 command_runner.go:130] ! I0603 14:46:43.262938       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177631    9752 command_runner.go:130] ! I0603 14:46:53.275414       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177631    9752 command_runner.go:130] ! I0603 14:46:53.275880       1 main.go:227] handling current node
	I0603 14:51:52.177701    9752 command_runner.go:130] ! I0603 14:46:53.276199       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177740    9752 command_runner.go:130] ! I0603 14:46:53.276372       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177740    9752 command_runner.go:130] ! I0603 14:46:53.276690       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177740    9752 command_runner.go:130] ! I0603 14:46:53.276766       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177740    9752 command_runner.go:130] ! I0603 14:47:03.282970       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177792    9752 command_runner.go:130] ! I0603 14:47:03.283067       1 main.go:227] handling current node
	I0603 14:51:52.177792    9752 command_runner.go:130] ! I0603 14:47:03.283157       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177792    9752 command_runner.go:130] ! I0603 14:47:03.283220       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177831    9752 command_runner.go:130] ! I0603 14:47:03.283747       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177831    9752 command_runner.go:130] ! I0603 14:47:03.283832       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177831    9752 command_runner.go:130] ! I0603 14:47:13.289208       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177902    9752 command_runner.go:130] ! I0603 14:47:13.289296       1 main.go:227] handling current node
	I0603 14:51:52.177902    9752 command_runner.go:130] ! I0603 14:47:13.289311       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177902    9752 command_runner.go:130] ! I0603 14:47:13.289321       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177942    9752 command_runner.go:130] ! I0603 14:47:13.290501       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177942    9752 command_runner.go:130] ! I0603 14:47:13.290610       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177942    9752 command_runner.go:130] ! I0603 14:47:23.305390       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177994    9752 command_runner.go:130] ! I0603 14:47:23.305479       1 main.go:227] handling current node
	I0603 14:51:52.177994    9752 command_runner.go:130] ! I0603 14:47:23.305494       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177994    9752 command_runner.go:130] ! I0603 14:47:23.305501       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.178088    9752 command_runner.go:130] ! I0603 14:47:23.306027       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.178129    9752 command_runner.go:130] ! I0603 14:47:23.306196       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.178129    9752 command_runner.go:130] ! I0603 14:47:33.320017       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.178181    9752 command_runner.go:130] ! I0603 14:47:33.320267       1 main.go:227] handling current node
	I0603 14:51:52.178181    9752 command_runner.go:130] ! I0603 14:47:33.320364       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.178181    9752 command_runner.go:130] ! I0603 14:47:33.320399       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.178181    9752 command_runner.go:130] ! I0603 14:47:33.320800       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.178258    9752 command_runner.go:130] ! I0603 14:47:33.320833       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.178258    9752 command_runner.go:130] ! I0603 14:47:43.329989       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.178293    9752 command_runner.go:130] ! I0603 14:47:43.330122       1 main.go:227] handling current node
	I0603 14:51:52.178293    9752 command_runner.go:130] ! I0603 14:47:43.330326       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.178293    9752 command_runner.go:130] ! I0603 14:47:43.330486       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.178293    9752 command_runner.go:130] ! I0603 14:47:43.331007       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.178293    9752 command_runner.go:130] ! I0603 14:47:43.331092       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.178293    9752 command_runner.go:130] ! I0603 14:47:53.346870       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.178397    9752 command_runner.go:130] ! I0603 14:47:53.347021       1 main.go:227] handling current node
	I0603 14:51:52.178397    9752 command_runner.go:130] ! I0603 14:47:53.347035       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.178397    9752 command_runner.go:130] ! I0603 14:47:53.347043       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.178397    9752 command_runner.go:130] ! I0603 14:47:53.347400       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.178441    9752 command_runner.go:130] ! I0603 14:47:53.347581       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.178463    9752 command_runner.go:130] ! I0603 14:48:03.360705       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.178488    9752 command_runner.go:130] ! I0603 14:48:03.360878       1 main.go:227] handling current node
	I0603 14:51:52.178488    9752 command_runner.go:130] ! I0603 14:48:03.360896       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.178488    9752 command_runner.go:130] ! I0603 14:48:03.360904       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.178488    9752 command_runner.go:130] ! I0603 14:48:03.361256       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.178488    9752 command_runner.go:130] ! I0603 14:48:03.361334       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.195800    9752 logs.go:123] Gathering logs for container status ...
	I0603 14:51:52.195800    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 14:51:52.264900    9752 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0603 14:51:52.265005    9752 command_runner.go:130] > f9b260d61dfbd       cbb01a7bd410d                                                                                         8 seconds ago        Running             coredns                   1                   1bc1567075734       coredns-7db6d8ff4d-c9wpc
	I0603 14:51:52.265005    9752 command_runner.go:130] > 291b656660b4b       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   526c48b9021d6       busybox-fc5497c4f-n2t5d
	I0603 14:51:52.265080    9752 command_runner.go:130] > c81abdbb29c7c       6e38f40d628db                                                                                         27 seconds ago       Running             storage-provisioner       2                   b4a4ad712a66e       storage-provisioner
	I0603 14:51:52.265080    9752 command_runner.go:130] > 008dec75d90c7       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a3698c141b116       kindnet-26s27
	I0603 14:51:52.265080    9752 command_runner.go:130] > 2061be0913b2b       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b4a4ad712a66e       storage-provisioner
	I0603 14:51:52.265080    9752 command_runner.go:130] > 42926c33070ce       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   2ae2b089ecf3b       kube-proxy-64l9x
	I0603 14:51:52.265174    9752 command_runner.go:130] > 885576ffcadd7       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   192b150e443d2       kube-apiserver-multinode-720500
	I0603 14:51:52.265174    9752 command_runner.go:130] > 480ef64cfa226       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   3e60bc15f541e       etcd-multinode-720500
	I0603 14:51:52.265253    9752 command_runner.go:130] > f14b3b67d8f28       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   29feb700b8ebf       kube-controller-manager-multinode-720500
	I0603 14:51:52.265253    9752 command_runner.go:130] > e2d000674d525       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   0461b752e7281       kube-scheduler-multinode-720500
	I0603 14:51:52.265253    9752 command_runner.go:130] > a76f9e773a2f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   e2a9c5dc3b1b0       busybox-fc5497c4f-n2t5d
	I0603 14:51:52.265253    9752 command_runner.go:130] > 68e49c3e6ddaa       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   1ac710138e878       coredns-7db6d8ff4d-c9wpc
	I0603 14:51:52.265357    9752 command_runner.go:130] > ab840a6a9856d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   91df341636e89       kindnet-26s27
	I0603 14:51:52.265357    9752 command_runner.go:130] > 3823f2e2bdb28       747097150317f                                                                                         24 minutes ago       Exited              kube-proxy                0                   45c98b77811e1       kube-proxy-64l9x
	I0603 14:51:52.265357    9752 command_runner.go:130] > 63a6ebee2e836       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   19b3080db261a       kube-controller-manager-multinode-720500
	I0603 14:51:52.265463    9752 command_runner.go:130] > ec3860b2bb3ef       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   73f8312902b01       kube-scheduler-multinode-720500
	I0603 14:51:52.268129    9752 logs.go:123] Gathering logs for kube-apiserver [885576ffcadd] ...
	I0603 14:51:52.268159    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 885576ffcadd"
	I0603 14:51:52.297077    9752 command_runner.go:130] ! I0603 14:50:36.316662       1 options.go:221] external host was not specified, using 172.22.154.20
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:36.322174       1 server.go:148] Version: v1.30.1
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:36.322276       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:37.048360       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:37.061107       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:37.064640       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:37.064927       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:37.065980       1 instance.go:299] Using reconciler: lease
	I0603 14:51:52.297330    9752 command_runner.go:130] ! I0603 14:50:37.835903       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0603 14:51:52.297330    9752 command_runner.go:130] ! W0603 14:50:37.835946       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297330    9752 command_runner.go:130] ! I0603 14:50:38.131228       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0603 14:51:52.297330    9752 command_runner.go:130] ! I0603 14:50:38.131786       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0603 14:51:52.297330    9752 command_runner.go:130] ! I0603 14:50:38.389972       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0603 14:51:52.297425    9752 command_runner.go:130] ! I0603 14:50:38.554749       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0603 14:51:52.297425    9752 command_runner.go:130] ! I0603 14:50:38.569175       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0603 14:51:52.297425    9752 command_runner.go:130] ! W0603 14:50:38.569288       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297425    9752 command_runner.go:130] ! W0603 14:50:38.569316       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.297508    9752 command_runner.go:130] ! I0603 14:50:38.570033       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0603 14:51:52.297508    9752 command_runner.go:130] ! W0603 14:50:38.570117       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297557    9752 command_runner.go:130] ! I0603 14:50:38.571568       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0603 14:51:52.297557    9752 command_runner.go:130] ! I0603 14:50:38.572496       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0603 14:51:52.297557    9752 command_runner.go:130] ! W0603 14:50:38.572572       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0603 14:51:52.297625    9752 command_runner.go:130] ! W0603 14:50:38.572581       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0603 14:51:52.297656    9752 command_runner.go:130] ! I0603 14:50:38.574368       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0603 14:51:52.297656    9752 command_runner.go:130] ! W0603 14:50:38.574469       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0603 14:51:52.297656    9752 command_runner.go:130] ! I0603 14:50:38.575393       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0603 14:51:52.297712    9752 command_runner.go:130] ! W0603 14:50:38.575496       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297712    9752 command_runner.go:130] ! W0603 14:50:38.575505       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.297754    9752 command_runner.go:130] ! I0603 14:50:38.576166       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0603 14:51:52.297754    9752 command_runner.go:130] ! W0603 14:50:38.576256       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297754    9752 command_runner.go:130] ! W0603 14:50:38.576314       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297824    9752 command_runner.go:130] ! I0603 14:50:38.577021       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0603 14:51:52.297864    9752 command_runner.go:130] ! I0603 14:50:38.579498       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0603 14:51:52.297864    9752 command_runner.go:130] ! W0603 14:50:38.579572       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297916    9752 command_runner.go:130] ! W0603 14:50:38.579581       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.297955    9752 command_runner.go:130] ! I0603 14:50:38.580213       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0603 14:51:52.297955    9752 command_runner.go:130] ! W0603 14:50:38.580317       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298008    9752 command_runner.go:130] ! W0603 14:50:38.580354       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.298008    9752 command_runner.go:130] ! I0603 14:50:38.581564       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0603 14:51:52.298008    9752 command_runner.go:130] ! W0603 14:50:38.581613       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0603 14:51:52.298049    9752 command_runner.go:130] ! I0603 14:50:38.584780       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0603 14:51:52.298049    9752 command_runner.go:130] ! W0603 14:50:38.585003       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298102    9752 command_runner.go:130] ! W0603 14:50:38.585204       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.298144    9752 command_runner.go:130] ! I0603 14:50:38.586651       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0603 14:51:52.298144    9752 command_runner.go:130] ! W0603 14:50:38.586996       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298144    9752 command_runner.go:130] ! W0603 14:50:38.587142       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.298219    9752 command_runner.go:130] ! I0603 14:50:38.595038       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0603 14:51:52.298219    9752 command_runner.go:130] ! W0603 14:50:38.595233       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298260    9752 command_runner.go:130] ! W0603 14:50:38.595389       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.298260    9752 command_runner.go:130] ! I0603 14:50:38.598793       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0603 14:51:52.298260    9752 command_runner.go:130] ! I0603 14:50:38.602076       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0603 14:51:52.298309    9752 command_runner.go:130] ! W0603 14:50:38.614489       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0603 14:51:52.298351    9752 command_runner.go:130] ! W0603 14:50:38.614724       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298351    9752 command_runner.go:130] ! I0603 14:50:38.625009       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0603 14:51:52.298351    9752 command_runner.go:130] ! W0603 14:50:38.625156       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0603 14:51:52.298403    9752 command_runner.go:130] ! W0603 14:50:38.625167       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:38.628702       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0603 14:51:52.298403    9752 command_runner.go:130] ! W0603 14:50:38.628761       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298403    9752 command_runner.go:130] ! W0603 14:50:38.628770       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:38.629748       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0603 14:51:52.298403    9752 command_runner.go:130] ! W0603 14:50:38.629860       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:38.645169       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0603 14:51:52.298403    9752 command_runner.go:130] ! W0603 14:50:38.645265       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:39.261254       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:39.261440       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:39.261269       1 secure_serving.go:213] Serving securely on [::]:8443
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:39.261878       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:39.262067       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:39.265023       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0603 14:51:52.298651    9752 command_runner.go:130] ! I0603 14:50:39.265458       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0603 14:51:52.298651    9752 command_runner.go:130] ! I0603 14:50:39.265691       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0603 14:51:52.298700    9752 command_runner.go:130] ! I0603 14:50:39.266224       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0603 14:51:52.298700    9752 command_runner.go:130] ! I0603 14:50:39.266475       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0603 14:51:52.298700    9752 command_runner.go:130] ! I0603 14:50:39.266740       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.267054       1 aggregator.go:163] waiting for initial CRD sync...
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.267429       1 controller.go:116] Starting legacy_token_tracking_controller
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.267943       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.268211       1 controller.go:78] Starting OpenAPI AggregationController
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.268471       1 available_controller.go:423] Starting AvailableConditionController
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.268557       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.268599       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.269220       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.284296       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.284599       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.269381       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.285184       1 controller.go:139] Starting OpenAPI controller
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.285202       1 controller.go:87] Starting OpenAPI V3 controller
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.285216       1 naming_controller.go:291] Starting NamingConditionController
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.285225       1 establishing_controller.go:76] Starting EstablishingController
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.285237       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.285244       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 14:51:52.299083    9752 command_runner.go:130] ! I0603 14:50:39.285251       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 14:51:52.299083    9752 command_runner.go:130] ! I0603 14:50:39.285707       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 14:51:52.299083    9752 command_runner.go:130] ! I0603 14:50:39.307386       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 14:51:52.299083    9752 command_runner.go:130] ! I0603 14:50:39.313286       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0603 14:51:52.299083    9752 command_runner.go:130] ! I0603 14:50:39.410099       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 14:51:52.299161    9752 command_runner.go:130] ! I0603 14:50:39.413505       1 aggregator.go:165] initial CRD sync complete...
	I0603 14:51:52.299161    9752 command_runner.go:130] ! I0603 14:50:39.413538       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 14:51:52.299161    9752 command_runner.go:130] ! I0603 14:50:39.413547       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 14:51:52.299217    9752 command_runner.go:130] ! I0603 14:50:39.450903       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 14:51:52.299217    9752 command_runner.go:130] ! I0603 14:50:39.462513       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:51:52.299370    9752 command_runner.go:130] ! I0603 14:50:39.464182       1 policy_source.go:224] refreshing policies
	I0603 14:51:52.299412    9752 command_runner.go:130] ! I0603 14:50:39.465876       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 14:51:52.299461    9752 command_runner.go:130] ! I0603 14:50:39.466992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 14:51:52.299549    9752 command_runner.go:130] ! I0603 14:50:39.468755       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 14:51:52.299549    9752 command_runner.go:130] ! I0603 14:50:39.469769       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 14:51:52.299549    9752 command_runner.go:130] ! I0603 14:50:39.474781       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 14:51:52.299615    9752 command_runner.go:130] ! I0603 14:50:39.486280       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 14:51:52.299615    9752 command_runner.go:130] ! I0603 14:50:39.486306       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 14:51:52.299703    9752 command_runner.go:130] ! I0603 14:50:39.514217       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 14:51:52.299703    9752 command_runner.go:130] ! I0603 14:50:39.514539       1 cache.go:39] Caches are synced for autoregister controller
	I0603 14:51:52.299728    9752 command_runner.go:130] ! I0603 14:50:40.271657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 14:51:52.299728    9752 command_runner.go:130] ! W0603 14:50:40.806504       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.22.154.20]
	I0603 14:51:52.299770    9752 command_runner.go:130] ! I0603 14:50:40.811756       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 14:51:52.299770    9752 command_runner.go:130] ! I0603 14:50:40.836037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 14:51:52.299770    9752 command_runner.go:130] ! I0603 14:50:42.134633       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 14:51:52.299811    9752 command_runner.go:130] ! I0603 14:50:42.350516       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 14:51:52.299811    9752 command_runner.go:130] ! I0603 14:50:42.378696       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 14:51:52.299811    9752 command_runner.go:130] ! I0603 14:50:42.521546       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 14:51:52.299872    9752 command_runner.go:130] ! I0603 14:50:42.533218       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 14:51:52.306817    9752 logs.go:123] Gathering logs for etcd [480ef64cfa22] ...
	I0603 14:51:52.306817    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480ef64cfa22"
	I0603 14:51:52.332623    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:35.886507Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 14:51:52.333446    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.887805Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.22.154.20:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.22.154.20:2380","--initial-cluster=multinode-720500=https://172.22.154.20:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.22.154.20:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.22.154.20:2380","--name=multinode-720500","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--prox
y-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0603 14:51:52.333482    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888235Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:35.88843Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888669Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.22.154.20:2380"]}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888851Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.900566Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"]}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.902079Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-720500","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initia
l-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.951251Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"47.801744ms"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.980047Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.011946Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","commit-index":2070}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=()"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became follower at term 2"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a5b02d21ad5b31ff [peers: [], term: 2, commit: 2070, applied: 0, lastindex: 2070, lastterm: 2]"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:36.026369Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.034388Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1394}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.043305Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1796}
	I0603 14:51:52.334963    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.052705Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0603 14:51:52.334963    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.062682Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"a5b02d21ad5b31ff","timeout":"7s"}
	I0603 14:51:52.335221    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.063103Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"a5b02d21ad5b31ff"}
	I0603 14:51:52.335221    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.063165Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"a5b02d21ad5b31ff","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0603 14:51:52.335221    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06697Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0603 14:51:52.335221    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06815Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 14:51:52.335221    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.068652Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0603 14:51:52.335369    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0603 14:51:52.335369    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.068733Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0603 14:51:52.335369    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=(11939092234824790527)"}
	I0603 14:51:52.335369    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","added-peer-id":"a5b02d21ad5b31ff","added-peer-peer-urls":["https://172.22.150.195:2380"]}
	I0603 14:51:52.335476    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","cluster-version":"3.5"}
	I0603 14:51:52.335476    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069633Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0603 14:51:52.335541    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069793Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a5b02d21ad5b31ff","initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0603 14:51:52.335541    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069837Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0603 14:51:52.335604    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069995Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.22.154.20:2380"}
	I0603 14:51:52.335604    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.070008Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.22.154.20:2380"}
	I0603 14:51:52.335604    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.714622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff is starting a new election at term 2"}
	I0603 14:51:52.335775    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became pre-candidate at term 2"}
	I0603 14:51:52.335815    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.71538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgPreVoteResp from a5b02d21ad5b31ff at term 2"}
	I0603 14:51:52.335869    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became candidate at term 3"}
	I0603 14:51:52.335869    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgVoteResp from a5b02d21ad5b31ff at term 3"}
	I0603 14:51:52.335910    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.716205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became leader at term 3"}
	I0603 14:51:52.335950    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.716405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a5b02d21ad5b31ff elected leader a5b02d21ad5b31ff at term 3"}
	I0603 14:51:52.335950    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.724847Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 14:51:52.336073    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.724791Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a5b02d21ad5b31ff","local-member-attributes":"{Name:multinode-720500 ClientURLs:[https://172.22.154.20:2379]}","request-path":"/0/members/a5b02d21ad5b31ff/attributes","cluster-id":"6a80a2fe8578e5e6","publish-timeout":"7s"}
	I0603 14:51:52.336101    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.725564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 14:51:52.336101    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.726196Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0603 14:51:52.336101    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.726364Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0603 14:51:52.336101    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.729309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0603 14:51:52.336101    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.730855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.22.154.20:2379"}
	I0603 14:51:52.346842    9752 logs.go:123] Gathering logs for kube-proxy [3823f2e2bdb2] ...
	I0603 14:51:52.346842    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3823f2e2bdb2"
	I0603 14:51:52.372692    9752 command_runner.go:130] ! I0603 14:27:34.209759       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:51:52.372692    9752 command_runner.go:130] ! I0603 14:27:34.223354       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.150.195"]
	I0603 14:51:52.372692    9752 command_runner.go:130] ! I0603 14:27:34.293018       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:51:52.372692    9752 command_runner.go:130] ! I0603 14:27:34.293146       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:51:52.373041    9752 command_runner.go:130] ! I0603 14:27:34.293240       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:51:52.373079    9752 command_runner.go:130] ! I0603 14:27:34.299545       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:51:52.373079    9752 command_runner.go:130] ! I0603 14:27:34.300745       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:51:52.373079    9752 command_runner.go:130] ! I0603 14:27:34.300860       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:52.373079    9752 command_runner.go:130] ! I0603 14:27:34.304329       1 config.go:192] "Starting service config controller"
	I0603 14:51:52.373169    9752 command_runner.go:130] ! I0603 14:27:34.304371       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:51:52.373208    9752 command_runner.go:130] ! I0603 14:27:34.304437       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:51:52.373220    9752 command_runner.go:130] ! I0603 14:27:34.304447       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:51:52.373262    9752 command_runner.go:130] ! I0603 14:27:34.308322       1 config.go:319] "Starting node config controller"
	I0603 14:51:52.373262    9752 command_runner.go:130] ! I0603 14:27:34.308362       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:51:52.373262    9752 command_runner.go:130] ! I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:51:52.373262    9752 command_runner.go:130] ! I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:51:52.373262    9752 command_runner.go:130] ! I0603 14:27:34.409156       1 shared_informer.go:320] Caches are synced for node config
	I0603 14:51:54.892527    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:51:54.902158    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 200:
	ok
	I0603 14:51:54.902514    9752 round_trippers.go:463] GET https://172.22.154.20:8443/version
	I0603 14:51:54.902514    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:54.902514    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:54.902514    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:54.904079    9752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:51:54.904079    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:54.904540    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:54.904540    9752 round_trippers.go:580]     Content-Length: 263
	I0603 14:51:54.904540    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:54 GMT
	I0603 14:51:54.904540    9752 round_trippers.go:580]     Audit-Id: 005c12dc-db55-4252-ac7c-42d0ce099d4f
	I0603 14:51:54.904578    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:54.904578    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:54.904578    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:54.904578    9752 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 14:51:54.904696    9752 api_server.go:141] control plane version: v1.30.1
	I0603 14:51:54.904696    9752 api_server.go:131] duration metric: took 3.7443414s to wait for apiserver health ...
	I0603 14:51:54.904696    9752 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 14:51:54.914519    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0603 14:51:54.937851    9752 command_runner.go:130] > 885576ffcadd
	I0603 14:51:54.937851    9752 logs.go:276] 1 containers: [885576ffcadd]
	I0603 14:51:54.947497    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0603 14:51:54.968500    9752 command_runner.go:130] > 480ef64cfa22
	I0603 14:51:54.969516    9752 logs.go:276] 1 containers: [480ef64cfa22]
	I0603 14:51:54.978496    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0603 14:51:54.998504    9752 command_runner.go:130] > f9b260d61dfb
	I0603 14:51:54.999520    9752 command_runner.go:130] > 68e49c3e6dda
	I0603 14:51:54.999520    9752 logs.go:276] 2 containers: [f9b260d61dfb 68e49c3e6dda]
	I0603 14:51:55.007494    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0603 14:51:55.028453    9752 command_runner.go:130] > e2d000674d52
	I0603 14:51:55.028491    9752 command_runner.go:130] > ec3860b2bb3e
	I0603 14:51:55.028491    9752 logs.go:276] 2 containers: [e2d000674d52 ec3860b2bb3e]
	I0603 14:51:55.038409    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0603 14:51:55.063604    9752 command_runner.go:130] > 42926c33070c
	I0603 14:51:55.063684    9752 command_runner.go:130] > 3823f2e2bdb2
	I0603 14:51:55.063752    9752 logs.go:276] 2 containers: [42926c33070c 3823f2e2bdb2]
	I0603 14:51:55.073790    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0603 14:51:55.097161    9752 command_runner.go:130] > f14b3b67d8f2
	I0603 14:51:55.097161    9752 command_runner.go:130] > 63a6ebee2e83
	I0603 14:51:55.097161    9752 logs.go:276] 2 containers: [f14b3b67d8f2 63a6ebee2e83]
	I0603 14:51:55.106155    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0603 14:51:55.129204    9752 command_runner.go:130] > 008dec75d90c
	I0603 14:51:55.129204    9752 command_runner.go:130] > ab840a6a9856
	I0603 14:51:55.130305    9752 logs.go:276] 2 containers: [008dec75d90c ab840a6a9856]
	I0603 14:51:55.130505    9752 logs.go:123] Gathering logs for kube-scheduler [ec3860b2bb3e] ...
	I0603 14:51:55.130505    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3860b2bb3e"
	I0603 14:51:55.157343    9752 command_runner.go:130] ! I0603 14:27:13.528076       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:55.158146    9752 command_runner.go:130] ! W0603 14:27:15.031664       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 14:51:55.158146    9752 command_runner.go:130] ! W0603 14:27:15.031870       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:55.158146    9752 command_runner.go:130] ! W0603 14:27:15.032299       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 14:51:55.158146    9752 command_runner.go:130] ! W0603 14:27:15.032427       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:51:55.158146    9752 command_runner.go:130] ! I0603 14:27:15.125795       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:51:55.158146    9752 command_runner.go:130] ! I0603 14:27:15.125934       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.158146    9752 command_runner.go:130] ! I0603 14:27:15.129030       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:51:55.158146    9752 command_runner.go:130] ! I0603 14:27:15.132330       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:51:55.158146    9752 command_runner.go:130] ! I0603 14:27:15.140068       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:55.158146    9752 command_runner.go:130] ! I0603 14:27:15.132344       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:55.158146    9752 command_runner.go:130] ! W0603 14:27:15.148563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.158146    9752 command_runner.go:130] ! E0603 14:27:15.150706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.158146    9752 command_runner.go:130] ! W0603 14:27:15.151023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:55.158146    9752 command_runner.go:130] ! E0603 14:27:15.152765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:55.158685    9752 command_runner.go:130] ! W0603 14:27:15.154981       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:55.158685    9752 command_runner.go:130] ! E0603 14:27:15.155066       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:55.158685    9752 command_runner.go:130] ! W0603 14:27:15.155620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.158798    9752 command_runner.go:130] ! E0603 14:27:15.155698       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.158888    9752 command_runner.go:130] ! W0603 14:27:15.155839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.158949    9752 command_runner.go:130] ! E0603 14:27:15.155928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.158973    9752 command_runner.go:130] ! W0603 14:27:15.151535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:55.158973    9752 command_runner.go:130] ! E0603 14:27:15.156969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:55.159036    9752 command_runner.go:130] ! W0603 14:27:15.156902       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:55.159036    9752 command_runner.go:130] ! E0603 14:27:15.158297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:55.159130    9752 command_runner.go:130] ! W0603 14:27:15.151896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:55.159130    9752 command_runner.go:130] ! E0603 14:27:15.159055       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:55.159183    9752 command_runner.go:130] ! W0603 14:27:15.152056       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:55.159183    9752 command_runner.go:130] ! E0603 14:27:15.159892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:15.152248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:15.152377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:15.152535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:15.152729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:15.156318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:15.151779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:15.160787       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:15.160968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:15.161285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:15.161862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:15.161874       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:15.161880       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:16.140920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:16.140979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:16.241899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:16.242196       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:55.159822    9752 command_runner.go:130] ! W0603 14:27:16.262469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.159822    9752 command_runner.go:130] ! E0603 14:27:16.263070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.159822    9752 command_runner.go:130] ! W0603 14:27:16.294257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.159965    9752 command_runner.go:130] ! E0603 14:27:16.294495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.160000    9752 command_runner.go:130] ! W0603 14:27:16.364252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:55.160000    9752 command_runner.go:130] ! E0603 14:27:16.364604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:55.160000    9752 command_runner.go:130] ! W0603 14:27:16.422522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:55.160196    9752 command_runner.go:130] ! E0603 14:27:16.422581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:55.160196    9752 command_runner.go:130] ! W0603 14:27:16.468112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.160196    9752 command_runner.go:130] ! E0603 14:27:16.468324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.160393    9752 command_runner.go:130] ! W0603 14:27:16.510809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:55.160393    9752 command_runner.go:130] ! E0603 14:27:16.511288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:55.160504    9752 command_runner.go:130] ! W0603 14:27:16.596260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:55.160504    9752 command_runner.go:130] ! E0603 14:27:16.596369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:55.160504    9752 command_runner.go:130] ! W0603 14:27:16.607837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.160504    9752 command_runner.go:130] ! E0603 14:27:16.608073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! W0603 14:27:16.665087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! E0603 14:27:16.666440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! W0603 14:27:16.711247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! E0603 14:27:16.711594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! W0603 14:27:16.716923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! E0603 14:27:16.716968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! W0603 14:27:16.731690       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:55.160580    9752 command_runner.go:130] ! E0603 14:27:16.732816       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:55.160580    9752 command_runner.go:130] ! W0603 14:27:16.743716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! E0603 14:27:16.743766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! I0603 14:27:18.441261       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:55.160580    9752 command_runner.go:130] ! E0603 14:48:07.717597       1 run.go:74] "command failed" err="finished without leader elect"
	I0603 14:51:55.171362    9752 logs.go:123] Gathering logs for kube-controller-manager [63a6ebee2e83] ...
	I0603 14:51:55.171362    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a6ebee2e83"
	I0603 14:51:55.199327    9752 command_runner.go:130] ! I0603 14:27:13.353282       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:55.200149    9752 command_runner.go:130] ! I0603 14:27:13.803232       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 14:51:55.200149    9752 command_runner.go:130] ! I0603 14:27:13.803270       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.200149    9752 command_runner.go:130] ! I0603 14:27:13.805599       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 14:51:55.200241    9752 command_runner.go:130] ! I0603 14:27:13.806647       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:55.200241    9752 command_runner.go:130] ! I0603 14:27:13.806911       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:55.200241    9752 command_runner.go:130] ! I0603 14:27:13.807149       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:55.200241    9752 command_runner.go:130] ! I0603 14:27:18.070475       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 14:51:55.200357    9752 command_runner.go:130] ! I0603 14:27:18.071643       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 14:51:55.200379    9752 command_runner.go:130] ! I0603 14:27:18.088516       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 14:51:55.200405    9752 command_runner.go:130] ! I0603 14:27:18.089260       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 14:51:55.200405    9752 command_runner.go:130] ! I0603 14:27:18.091678       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 14:51:55.200405    9752 command_runner.go:130] ! I0603 14:27:18.106231       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 14:51:55.201325    9752 command_runner.go:130] ! I0603 14:27:18.107081       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 14:51:55.202191    9752 command_runner.go:130] ! I0603 14:27:18.108455       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:55.202191    9752 command_runner.go:130] ! I0603 14:27:18.109348       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 14:51:55.202191    9752 command_runner.go:130] ! I0603 14:27:18.151033       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 14:51:55.202278    9752 command_runner.go:130] ! I0603 14:27:18.151678       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 14:51:55.202278    9752 command_runner.go:130] ! I0603 14:27:18.154062       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 14:51:55.202317    9752 command_runner.go:130] ! I0603 14:27:18.171773       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 14:51:55.202317    9752 command_runner.go:130] ! I0603 14:27:18.172224       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 14:51:55.202373    9752 command_runner.go:130] ! I0603 14:27:18.174296       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 14:51:55.202373    9752 command_runner.go:130] ! I0603 14:27:18.174338       1 shared_informer.go:320] Caches are synced for tokens
	I0603 14:51:55.202411    9752 command_runner.go:130] ! I0603 14:27:18.177788       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 14:51:55.202411    9752 command_runner.go:130] ! I0603 14:27:18.178320       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 14:51:55.202441    9752 command_runner.go:130] ! I0603 14:27:28.218964       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.219108       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.219379       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.219457       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.240397       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.240536       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.241865       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.252890       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.252986       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.253020       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.253969       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.254003       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.267837       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.268144       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.268510       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.280487       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.280963       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.281100       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 14:51:55.203009    9752 command_runner.go:130] ! I0603 14:27:28.330303       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 14:51:55.203009    9752 command_runner.go:130] ! I0603 14:27:28.330841       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 14:51:55.203110    9752 command_runner.go:130] ! E0603 14:27:28.344040       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 14:51:55.203145    9752 command_runner.go:130] ! I0603 14:27:28.344231       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 14:51:55.203176    9752 command_runner.go:130] ! I0603 14:27:28.359644       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 14:51:55.203227    9752 command_runner.go:130] ! I0603 14:27:28.360056       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 14:51:55.203227    9752 command_runner.go:130] ! I0603 14:27:28.360090       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 14:51:55.203227    9752 command_runner.go:130] ! I0603 14:27:28.377777       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 14:51:55.203227    9752 command_runner.go:130] ! I0603 14:27:28.378044       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 14:51:55.203227    9752 command_runner.go:130] ! I0603 14:27:28.378071       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 14:51:55.203350    9752 command_runner.go:130] ! I0603 14:27:28.393317       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 14:51:55.203350    9752 command_runner.go:130] ! I0603 14:27:28.393857       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 14:51:55.203452    9752 command_runner.go:130] ! I0603 14:27:28.394059       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 14:51:55.203452    9752 command_runner.go:130] ! I0603 14:27:28.410446       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 14:51:55.203552    9752 command_runner.go:130] ! I0603 14:27:28.411081       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 14:51:55.203552    9752 command_runner.go:130] ! I0603 14:27:28.412101       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 14:51:55.203634    9752 command_runner.go:130] ! I0603 14:27:28.512629       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 14:51:55.203634    9752 command_runner.go:130] ! I0603 14:27:28.513125       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 14:51:55.203709    9752 command_runner.go:130] ! I0603 14:27:28.664349       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 14:51:55.203709    9752 command_runner.go:130] ! I0603 14:27:28.664428       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 14:51:55.203748    9752 command_runner.go:130] ! I0603 14:27:28.664441       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 14:51:55.203815    9752 command_runner.go:130] ! I0603 14:27:28.664449       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 14:51:55.203815    9752 command_runner.go:130] ! I0603 14:27:28.708054       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 14:51:55.203882    9752 command_runner.go:130] ! I0603 14:27:28.708215       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 14:51:55.204036    9752 command_runner.go:130] ! I0603 14:27:28.708231       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 14:51:55.204036    9752 command_runner.go:130] ! I0603 14:27:28.708444       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 14:51:55.204217    9752 command_runner.go:130] ! I0603 14:27:28.708473       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 14:51:55.204280    9752 command_runner.go:130] ! I0603 14:27:28.708481       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 14:51:55.204375    9752 command_runner.go:130] ! I0603 14:27:28.864634       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 14:51:55.204399    9752 command_runner.go:130] ! I0603 14:27:28.864803       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:28.865680       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.059529       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.059649       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.059722       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.059857       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.216054       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.216706       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.217129       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.364837       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.364997       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.365010       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.412763       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.412820       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.412852       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.412870       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.566965       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.567223       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.568152       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.820140       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.821302       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.821913       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.821950       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.821977       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 14:51:55.205010    9752 command_runner.go:130] ! E0603 14:27:29.857788       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 14:51:55.205010    9752 command_runner.go:130] ! I0603 14:27:29.858966       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 14:51:55.205056    9752 command_runner.go:130] ! I0603 14:27:30.016833       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 14:51:55.205056    9752 command_runner.go:130] ! I0603 14:27:30.016997       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 14:51:55.205103    9752 command_runner.go:130] ! I0603 14:27:30.017402       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 14:51:55.205126    9752 command_runner.go:130] ! I0603 14:27:30.171847       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 14:51:55.205126    9752 command_runner.go:130] ! I0603 14:27:30.172459       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 14:51:55.205199    9752 command_runner.go:130] ! I0603 14:27:30.171899       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 14:51:55.205227    9752 command_runner.go:130] ! I0603 14:27:30.172588       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 14:51:55.205247    9752 command_runner.go:130] ! I0603 14:27:30.313964       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 14:51:55.205278    9752 command_runner.go:130] ! I0603 14:27:30.316900       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 14:51:55.205306    9752 command_runner.go:130] ! I0603 14:27:30.318749       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 14:51:55.205331    9752 command_runner.go:130] ! I0603 14:27:30.359770       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 14:51:55.205331    9752 command_runner.go:130] ! I0603 14:27:30.359992       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 14:51:55.205331    9752 command_runner.go:130] ! I0603 14:27:30.360405       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.205418    9752 command_runner.go:130] ! I0603 14:27:30.361780       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 14:51:55.205418    9752 command_runner.go:130] ! I0603 14:27:30.362782       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 14:51:55.205478    9752 command_runner.go:130] ! I0603 14:27:30.362463       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 14:51:55.205478    9752 command_runner.go:130] ! I0603 14:27:30.363332       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:55.205518    9752 command_runner.go:130] ! I0603 14:27:30.362554       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 14:51:55.205518    9752 command_runner.go:130] ! I0603 14:27:30.363636       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 14:51:55.205518    9752 command_runner.go:130] ! I0603 14:27:30.362564       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.205585    9752 command_runner.go:130] ! I0603 14:27:30.362302       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 14:51:55.205585    9752 command_runner.go:130] ! I0603 14:27:30.362526       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.205585    9752 command_runner.go:130] ! I0603 14:27:30.362586       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.205645    9752 command_runner.go:130] ! I0603 14:27:30.513474       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 14:51:55.205669    9752 command_runner.go:130] ! I0603 14:27:30.513598       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 14:51:55.205713    9752 command_runner.go:130] ! I0603 14:27:30.513645       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 14:51:55.205740    9752 command_runner.go:130] ! I0603 14:27:30.663349       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 14:51:55.205740    9752 command_runner.go:130] ! I0603 14:27:30.663937       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 14:51:55.205829    9752 command_runner.go:130] ! I0603 14:27:30.664013       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 14:51:55.205829    9752 command_runner.go:130] ! I0603 14:27:30.965387       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.965553       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.965614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.965669       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.965730       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! W0603 14:27:30.965760       1 shared_informer.go:597] resyncPeriod 16h47m43.189313611s is smaller than resyncCheckPeriod 20h18m50.945071724s and the informer has already started. Changing it to 20h18m50.945071724s
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.965868       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.966063       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.966153       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.966351       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! W0603 14:27:30.966376       1 shared_informer.go:597] resyncPeriod 20h4m14.719740563s is smaller than resyncCheckPeriod 20h18m50.945071724s and the informer has already started. Changing it to 20h18m50.945071724s
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.966444       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.966547       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.966953       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.967035       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.967206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.967556       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.967765       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.967951       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.968043       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.968127       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.968266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.968373       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.969236       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 14:51:55.206450    9752 command_runner.go:130] ! I0603 14:27:30.969448       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:55.206450    9752 command_runner.go:130] ! I0603 14:27:30.969971       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 14:51:55.206512    9752 command_runner.go:130] ! I0603 14:27:31.113941       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 14:51:55.206512    9752 command_runner.go:130] ! I0603 14:27:31.114128       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 14:51:55.206602    9752 command_runner.go:130] ! I0603 14:27:31.114206       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 14:51:55.206637    9752 command_runner.go:130] ! I0603 14:27:31.263385       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 14:51:55.206637    9752 command_runner.go:130] ! I0603 14:27:31.263850       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 14:51:55.206637    9752 command_runner.go:130] ! I0603 14:27:31.263883       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 14:51:55.206637    9752 command_runner.go:130] ! I0603 14:27:31.412784       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 14:51:55.206698    9752 command_runner.go:130] ! I0603 14:27:31.412929       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 14:51:55.206722    9752 command_runner.go:130] ! I0603 14:27:31.412960       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 14:51:55.206722    9752 command_runner.go:130] ! I0603 14:27:31.563645       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 14:51:55.206722    9752 command_runner.go:130] ! I0603 14:27:31.563784       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 14:51:55.206722    9752 command_runner.go:130] ! I0603 14:27:31.563863       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 14:51:55.206826    9752 command_runner.go:130] ! I0603 14:27:31.716550       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 14:51:55.206826    9752 command_runner.go:130] ! I0603 14:27:31.717040       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 14:51:55.206826    9752 command_runner.go:130] ! I0603 14:27:31.717246       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 14:51:55.206826    9752 command_runner.go:130] ! I0603 14:27:31.727461       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:55.206904    9752 command_runner.go:130] ! I0603 14:27:31.754004       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500\" does not exist"
	I0603 14:51:55.206904    9752 command_runner.go:130] ! I0603 14:27:31.754224       1 shared_informer.go:320] Caches are synced for GC
	I0603 14:51:55.206904    9752 command_runner.go:130] ! I0603 14:27:31.754460       1 shared_informer.go:320] Caches are synced for HPA
	I0603 14:51:55.206904    9752 command_runner.go:130] ! I0603 14:27:31.760470       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:51:55.207006    9752 command_runner.go:130] ! I0603 14:27:31.761503       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.763249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.763617       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.764580       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.765622       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.765811       1 shared_informer.go:320] Caches are synced for TTL
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.765139       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.765067       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.768636       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.770136       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.772665       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.775271       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.782285       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.792874       1 shared_informer.go:320] Caches are synced for service account
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.795205       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.809247       1 shared_informer.go:320] Caches are synced for taint
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.809495       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.810723       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500"
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.812015       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.812917       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.812992       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.815953       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.816065       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.816884       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.817703       1 shared_informer.go:320] Caches are synced for expand
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.817728       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.819607       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.820072       1 shared_informer.go:320] Caches are synced for node
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.820270       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.820477       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.820555       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.820587       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.820081       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.825727       1 shared_informer.go:320] Caches are synced for namespace
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.832846       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.842133       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.855357       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500" podCIDRs=["10.244.0.0/24"]
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.878271       1 shared_informer.go:320] Caches are synced for job
	I0603 14:51:55.207559    9752 command_runner.go:130] ! I0603 14:27:31.913558       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:51:55.207559    9752 command_runner.go:130] ! I0603 14:27:31.965153       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.028352       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.061268       1 shared_informer.go:320] Caches are synced for disruption
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.065241       1 shared_informer.go:320] Caches are synced for deployment
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.069863       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.469591       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.510278       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.510533       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:33.110436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="199.281878ms"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:33.230475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="119.89616ms"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:33.230569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59µs"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:34.176449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.004127ms"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:34.199426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.643683ms"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:34.201037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.6µs"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:43.109227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="168.101µs"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:43.154756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="203.6µs"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:44.622262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.3µs"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:45.655101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.946906ms"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:45.656447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.098µs"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:46.817078       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:30:30.530460       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:30:30.563054       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m02" podCIDRs=["10.244.1.0/24"]
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:30:31.846889       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:30:49.741096       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:31:16.611365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.145667ms"
	I0603 14:51:55.208221    9752 command_runner.go:130] ! I0603 14:31:16.634251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.843998ms"
	I0603 14:51:55.208221    9752 command_runner.go:130] ! I0603 14:31:16.634722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="196.103µs"
	I0603 14:51:55.208221    9752 command_runner.go:130] ! I0603 14:31:16.635057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.4µs"
	I0603 14:51:55.208221    9752 command_runner.go:130] ! I0603 14:31:16.670503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.001µs"
	I0603 14:51:55.208312    9752 command_runner.go:130] ! I0603 14:31:19.698737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.129108ms"
	I0603 14:51:55.208312    9752 command_runner.go:130] ! I0603 14:31:19.698833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.8µs"
	I0603 14:51:55.208312    9752 command_runner.go:130] ! I0603 14:31:20.055879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.87041ms"
	I0603 14:51:55.208312    9752 command_runner.go:130] ! I0603 14:31:20.057158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.2µs"
	I0603 14:51:55.208312    9752 command_runner.go:130] ! I0603 14:35:14.351135       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.208408    9752 command_runner.go:130] ! I0603 14:35:14.351827       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:55.208408    9752 command_runner.go:130] ! I0603 14:35:14.376803       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.2.0/24"]
	I0603 14:51:55.208553    9752 command_runner.go:130] ! I0603 14:35:16.927010       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:51:55.208553    9752 command_runner.go:130] ! I0603 14:35:33.157459       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.208638    9752 command_runner.go:130] ! I0603 14:43:17.065455       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.208638    9752 command_runner.go:130] ! I0603 14:45:58.451014       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.208638    9752 command_runner.go:130] ! I0603 14:46:04.988996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.208702    9752 command_runner.go:130] ! I0603 14:46:04.989982       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:55.208702    9752 command_runner.go:130] ! I0603 14:46:05.046032       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.3.0/24"]
	I0603 14:51:55.208702    9752 command_runner.go:130] ! I0603 14:46:11.957254       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.208779    9752 command_runner.go:130] ! I0603 14:47:47.196592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.226914    9752 logs.go:123] Gathering logs for kube-apiserver [885576ffcadd] ...
	I0603 14:51:55.226914    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 885576ffcadd"
	I0603 14:51:55.259657    9752 command_runner.go:130] ! I0603 14:50:36.316662       1 options.go:221] external host was not specified, using 172.22.154.20
	I0603 14:51:55.259657    9752 command_runner.go:130] ! I0603 14:50:36.322174       1 server.go:148] Version: v1.30.1
	I0603 14:51:55.259657    9752 command_runner.go:130] ! I0603 14:50:36.322276       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.259657    9752 command_runner.go:130] ! I0603 14:50:37.048360       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 14:51:55.259764    9752 command_runner.go:130] ! I0603 14:50:37.061107       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:51:55.259826    9752 command_runner.go:130] ! I0603 14:50:37.064640       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 14:51:55.259826    9752 command_runner.go:130] ! I0603 14:50:37.064927       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 14:51:55.259893    9752 command_runner.go:130] ! I0603 14:50:37.065980       1 instance.go:299] Using reconciler: lease
	I0603 14:51:55.259924    9752 command_runner.go:130] ! I0603 14:50:37.835903       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0603 14:51:55.259924    9752 command_runner.go:130] ! W0603 14:50:37.835946       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.259924    9752 command_runner.go:130] ! I0603 14:50:38.131228       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0603 14:51:55.259984    9752 command_runner.go:130] ! I0603 14:50:38.131786       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0603 14:51:55.259984    9752 command_runner.go:130] ! I0603 14:50:38.389972       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0603 14:51:55.260007    9752 command_runner.go:130] ! I0603 14:50:38.554749       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0603 14:51:55.260007    9752 command_runner.go:130] ! I0603 14:50:38.569175       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0603 14:51:55.260061    9752 command_runner.go:130] ! W0603 14:50:38.569288       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260084    9752 command_runner.go:130] ! W0603 14:50:38.569316       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260084    9752 command_runner.go:130] ! I0603 14:50:38.570033       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0603 14:51:55.260084    9752 command_runner.go:130] ! W0603 14:50:38.570117       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260152    9752 command_runner.go:130] ! I0603 14:50:38.571568       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0603 14:51:55.260174    9752 command_runner.go:130] ! I0603 14:50:38.572496       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0603 14:51:55.260174    9752 command_runner.go:130] ! W0603 14:50:38.572572       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0603 14:51:55.260174    9752 command_runner.go:130] ! W0603 14:50:38.572581       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0603 14:51:55.260225    9752 command_runner.go:130] ! I0603 14:50:38.574368       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0603 14:51:55.260225    9752 command_runner.go:130] ! W0603 14:50:38.574469       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0603 14:51:55.260247    9752 command_runner.go:130] ! I0603 14:50:38.575393       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0603 14:51:55.260247    9752 command_runner.go:130] ! W0603 14:50:38.575496       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260297    9752 command_runner.go:130] ! W0603 14:50:38.575505       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260297    9752 command_runner.go:130] ! I0603 14:50:38.576166       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0603 14:51:55.260320    9752 command_runner.go:130] ! W0603 14:50:38.576256       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260320    9752 command_runner.go:130] ! W0603 14:50:38.576314       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.577021       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.579498       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.579572       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.579581       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.580213       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.580317       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.580354       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.581564       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.581613       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.584780       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.585003       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.585204       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.586651       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.586996       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.587142       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.595038       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.595233       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.595389       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.598793       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.602076       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.614489       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.614724       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.625009       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.625156       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.625167       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.628702       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.628761       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.628770       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.629748       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.629860       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.645169       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.645265       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260895    9752 command_runner.go:130] ! I0603 14:50:39.261254       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:55.260895    9752 command_runner.go:130] ! I0603 14:50:39.261440       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:55.260895    9752 command_runner.go:130] ! I0603 14:50:39.261269       1 secure_serving.go:213] Serving securely on [::]:8443
	I0603 14:51:55.260895    9752 command_runner.go:130] ! I0603 14:50:39.261878       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:55.260971    9752 command_runner.go:130] ! I0603 14:50:39.262067       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0603 14:51:55.260971    9752 command_runner.go:130] ! I0603 14:50:39.265023       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0603 14:51:55.261018    9752 command_runner.go:130] ! I0603 14:50:39.265458       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0603 14:51:55.261018    9752 command_runner.go:130] ! I0603 14:50:39.265691       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0603 14:51:55.261018    9752 command_runner.go:130] ! I0603 14:50:39.266224       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0603 14:51:55.261018    9752 command_runner.go:130] ! I0603 14:50:39.266475       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0603 14:51:55.261079    9752 command_runner.go:130] ! I0603 14:50:39.266740       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0603 14:51:55.261079    9752 command_runner.go:130] ! I0603 14:50:39.267054       1 aggregator.go:163] waiting for initial CRD sync...
	I0603 14:51:55.261079    9752 command_runner.go:130] ! I0603 14:50:39.267429       1 controller.go:116] Starting legacy_token_tracking_controller
	I0603 14:51:55.261079    9752 command_runner.go:130] ! I0603 14:50:39.267943       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0603 14:51:55.261079    9752 command_runner.go:130] ! I0603 14:50:39.268211       1 controller.go:78] Starting OpenAPI AggregationController
	I0603 14:51:55.261143    9752 command_runner.go:130] ! I0603 14:50:39.268471       1 available_controller.go:423] Starting AvailableConditionController
	I0603 14:51:55.261165    9752 command_runner.go:130] ! I0603 14:50:39.268557       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0603 14:51:55.261165    9752 command_runner.go:130] ! I0603 14:50:39.268599       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0603 14:51:55.261190    9752 command_runner.go:130] ! I0603 14:50:39.269220       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0603 14:51:55.261216    9752 command_runner.go:130] ! I0603 14:50:39.284296       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:55.261242    9752 command_runner.go:130] ! I0603 14:50:39.284599       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:55.261242    9752 command_runner.go:130] ! I0603 14:50:39.269381       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0603 14:51:55.261242    9752 command_runner.go:130] ! I0603 14:50:39.285184       1 controller.go:139] Starting OpenAPI controller
	I0603 14:51:55.261281    9752 command_runner.go:130] ! I0603 14:50:39.285202       1 controller.go:87] Starting OpenAPI V3 controller
	I0603 14:51:55.261281    9752 command_runner.go:130] ! I0603 14:50:39.285216       1 naming_controller.go:291] Starting NamingConditionController
	I0603 14:51:55.261281    9752 command_runner.go:130] ! I0603 14:50:39.285225       1 establishing_controller.go:76] Starting EstablishingController
	I0603 14:51:55.261423    9752 command_runner.go:130] ! I0603 14:50:39.285237       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 14:51:55.261525    9752 command_runner.go:130] ! I0603 14:50:39.285244       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 14:51:55.261546    9752 command_runner.go:130] ! I0603 14:50:39.285251       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 14:51:55.261546    9752 command_runner.go:130] ! I0603 14:50:39.285707       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 14:51:55.261546    9752 command_runner.go:130] ! I0603 14:50:39.307386       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 14:51:55.261607    9752 command_runner.go:130] ! I0603 14:50:39.313286       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0603 14:51:55.261607    9752 command_runner.go:130] ! I0603 14:50:39.410099       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 14:51:55.261632    9752 command_runner.go:130] ! I0603 14:50:39.413505       1 aggregator.go:165] initial CRD sync complete...
	I0603 14:51:55.261632    9752 command_runner.go:130] ! I0603 14:50:39.413538       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 14:51:55.261632    9752 command_runner.go:130] ! I0603 14:50:39.413547       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 14:51:55.261688    9752 command_runner.go:130] ! I0603 14:50:39.450903       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 14:51:55.261730    9752 command_runner.go:130] ! I0603 14:50:39.462513       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:51:55.261730    9752 command_runner.go:130] ! I0603 14:50:39.464182       1 policy_source.go:224] refreshing policies
	I0603 14:51:55.261818    9752 command_runner.go:130] ! I0603 14:50:39.465876       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 14:51:55.261842    9752 command_runner.go:130] ! I0603 14:50:39.466992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 14:51:55.261842    9752 command_runner.go:130] ! I0603 14:50:39.468755       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 14:51:55.261842    9752 command_runner.go:130] ! I0603 14:50:39.469769       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 14:51:55.261896    9752 command_runner.go:130] ! I0603 14:50:39.474781       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 14:51:55.261919    9752 command_runner.go:130] ! I0603 14:50:39.486280       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 14:51:55.261919    9752 command_runner.go:130] ! I0603 14:50:39.486306       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 14:51:55.261919    9752 command_runner.go:130] ! I0603 14:50:39.514217       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 14:51:55.261973    9752 command_runner.go:130] ! I0603 14:50:39.514539       1 cache.go:39] Caches are synced for autoregister controller
	I0603 14:51:55.261973    9752 command_runner.go:130] ! I0603 14:50:40.271657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 14:51:55.261973    9752 command_runner.go:130] ! W0603 14:50:40.806504       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.22.154.20]
	I0603 14:51:55.262030    9752 command_runner.go:130] ! I0603 14:50:40.811756       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 14:51:55.262030    9752 command_runner.go:130] ! I0603 14:50:40.836037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 14:51:55.262054    9752 command_runner.go:130] ! I0603 14:50:42.134633       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 14:51:55.262054    9752 command_runner.go:130] ! I0603 14:50:42.350516       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 14:51:55.262054    9752 command_runner.go:130] ! I0603 14:50:42.378696       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 14:51:55.262054    9752 command_runner.go:130] ! I0603 14:50:42.521546       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 14:51:55.262119    9752 command_runner.go:130] ! I0603 14:50:42.533218       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 14:51:55.268359    9752 logs.go:123] Gathering logs for etcd [480ef64cfa22] ...
	I0603 14:51:55.268359    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480ef64cfa22"
	I0603 14:51:55.293428    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:35.886507Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.887805Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.22.154.20:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.22.154.20:2380","--initial-cluster=multinode-720500=https://172.22.154.20:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.22.154.20:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.22.154.20:2380","--name=multinode-720500","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--prox
y-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888235Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:35.88843Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888669Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.22.154.20:2380"]}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888851Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.900566Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"]}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.902079Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-720500","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initia
l-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.951251Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"47.801744ms"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.980047Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.011946Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","commit-index":2070}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=()"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became follower at term 2"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a5b02d21ad5b31ff [peers: [], term: 2, commit: 2070, applied: 0, lastindex: 2070, lastterm: 2]"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:36.026369Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.034388Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1394}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.043305Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1796}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.052705Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0603 14:51:55.294186    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.062682Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"a5b02d21ad5b31ff","timeout":"7s"}
	I0603 14:51:55.294186    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.063103Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"a5b02d21ad5b31ff"}
	I0603 14:51:55.294186    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.063165Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"a5b02d21ad5b31ff","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0603 14:51:55.294186    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06697Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0603 14:51:55.294186    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06815Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 14:51:55.294348    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.068652Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0603 14:51:55.294348    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0603 14:51:55.294348    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.068733Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0603 14:51:55.294348    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=(11939092234824790527)"}
	I0603 14:51:55.294477    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","added-peer-id":"a5b02d21ad5b31ff","added-peer-peer-urls":["https://172.22.150.195:2380"]}
	I0603 14:51:55.294502    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","cluster-version":"3.5"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069633Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069793Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a5b02d21ad5b31ff","initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069837Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069995Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.22.154.20:2380"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.070008Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.22.154.20:2380"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.714622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff is starting a new election at term 2"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became pre-candidate at term 2"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.71538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgPreVoteResp from a5b02d21ad5b31ff at term 2"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became candidate at term 3"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgVoteResp from a5b02d21ad5b31ff at term 3"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.716205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became leader at term 3"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.716405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a5b02d21ad5b31ff elected leader a5b02d21ad5b31ff at term 3"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.724847Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.724791Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a5b02d21ad5b31ff","local-member-attributes":"{Name:multinode-720500 ClientURLs:[https://172.22.154.20:2379]}","request-path":"/0/members/a5b02d21ad5b31ff/attributes","cluster-id":"6a80a2fe8578e5e6","publish-timeout":"7s"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.725564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.726196Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.726364Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.729309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.730855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.22.154.20:2379"}
	I0603 14:51:55.301018    9752 logs.go:123] Gathering logs for coredns [68e49c3e6dda] ...
	I0603 14:51:55.301018    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68e49c3e6dda"
	I0603 14:51:55.326653    9752 command_runner.go:130] > .:53
	I0603 14:51:55.326723    9752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	I0603 14:51:55.326792    9752 command_runner.go:130] > CoreDNS-1.11.1
	I0603 14:51:55.326792    9752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 14:51:55.326827    9752 command_runner.go:130] > [INFO] 127.0.0.1:41900 - 64692 "HINFO IN 6455764258890599449.483474031935060007. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.132764335s
	I0603 14:51:55.326827    9752 command_runner.go:130] > [INFO] 10.244.1.2:42222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002636s
	I0603 14:51:55.326827    9752 command_runner.go:130] > [INFO] 10.244.1.2:57223 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.096802056s
	I0603 14:51:55.326827    9752 command_runner.go:130] > [INFO] 10.244.1.2:36397 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.151408488s
	I0603 14:51:55.326827    9752 command_runner.go:130] > [INFO] 10.244.1.2:59107 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.364951305s
	I0603 14:51:55.326900    9752 command_runner.go:130] > [INFO] 10.244.0.3:53007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004329s
	I0603 14:51:55.326921    9752 command_runner.go:130] > [INFO] 10.244.0.3:41844 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001542s
	I0603 14:51:55.326921    9752 command_runner.go:130] > [INFO] 10.244.0.3:33279 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174s
	I0603 14:51:55.326921    9752 command_runner.go:130] > [INFO] 10.244.0.3:34469 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0001054s
	I0603 14:51:55.326921    9752 command_runner.go:130] > [INFO] 10.244.1.2:33917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001325s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:49000 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025227215s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:40535 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002926s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:57809 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001012s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:43376 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024865416s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:51758 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003251s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:42717 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:52073 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001596s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:39307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001382s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:57391 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000513s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:40338 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001263s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:45271 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001333s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:50324 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000215901s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:51522 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001987s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:39150 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001291s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:56081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001424s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:46468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003026s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:57532 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130801s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:36166 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001469s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:58091 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001725s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:52049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274601s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:51870 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002814s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:51517 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001499s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:39242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000636s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:34329 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260201s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:47951 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001521s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:52718 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0003583s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:45357 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001838s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:50865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001742s
	I0603 14:51:55.327522    9752 command_runner.go:130] > [INFO] 10.244.0.3:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001322s
	I0603 14:51:55.327585    9752 command_runner.go:130] > [INFO] 10.244.0.3:51977 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	I0603 14:51:55.327585    9752 command_runner.go:130] > [INFO] 10.244.0.3:47306 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001807s
	I0603 14:51:55.327585    9752 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0603 14:51:55.327585    9752 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0603 14:51:55.330284    9752 logs.go:123] Gathering logs for kindnet [008dec75d90c] ...
	I0603 14:51:55.330284    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 008dec75d90c"
	I0603 14:51:55.360500    9752 command_runner.go:130] ! I0603 14:50:42.082079       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 14:51:55.360500    9752 command_runner.go:130] ! I0603 14:50:42.082943       1 main.go:107] hostIP = 172.22.154.20
	I0603 14:51:55.360500    9752 command_runner.go:130] ! podIP = 172.22.154.20
	I0603 14:51:55.360596    9752 command_runner.go:130] ! I0603 14:50:42.083380       1 main.go:116] setting mtu 1500 for CNI 
	I0603 14:51:55.360617    9752 command_runner.go:130] ! I0603 14:50:42.083413       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 14:51:55.360617    9752 command_runner.go:130] ! I0603 14:50:42.083683       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 14:51:55.360617    9752 command_runner.go:130] ! I0603 14:51:12.571541       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0603 14:51:55.360683    9752 command_runner.go:130] ! I0603 14:51:12.651275       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:55.360683    9752 command_runner.go:130] ! I0603 14:51:12.651428       1 main.go:227] handling current node
	I0603 14:51:55.360708    9752 command_runner.go:130] ! I0603 14:51:12.652437       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.360708    9752 command_runner.go:130] ! I0603 14:51:12.652687       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.360708    9752 command_runner.go:130] ! I0603 14:51:12.652926       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.22.146.196 Flags: [] Table: 0} 
	I0603 14:51:55.360774    9752 command_runner.go:130] ! I0603 14:51:12.653574       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.360774    9752 command_runner.go:130] ! I0603 14:51:12.653674       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.360774    9752 command_runner.go:130] ! I0603 14:51:12.653740       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.22.151.134 Flags: [] Table: 0} 
	I0603 14:51:55.360854    9752 command_runner.go:130] ! I0603 14:51:22.664648       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:55.360854    9752 command_runner.go:130] ! I0603 14:51:22.664694       1 main.go:227] handling current node
	I0603 14:51:55.360854    9752 command_runner.go:130] ! I0603 14:51:22.664708       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.360854    9752 command_runner.go:130] ! I0603 14:51:22.664715       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:22.664826       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:22.665507       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:32.678392       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:32.678477       1 main.go:227] handling current node
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:32.678492       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:32.679315       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:32.679578       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:32.679593       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:42.686747       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:42.686840       1 main.go:227] handling current node
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:42.686854       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:42.686861       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:42.687305       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:42.687446       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:52.707609       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:52.707654       1 main.go:227] handling current node
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:52.707666       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:52.707672       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:52.708072       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:52.708115       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
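	Apart from the single i/o timeout while the apiserver was still coming up, the kindnet log above is a healthy reconcile loop: roughly every ten seconds it handles the three nodes and keeps a route to each remote pod CIDR (10.244.1.0/24 via 172.22.146.196, 10.244.3.0/24 via 172.22.151.134). Whether those routes actually landed on the node can be verified with a quick sketch like the following (profile name taken from the log):

	    # list the pod-CIDR routes that kindnet reports adding above
	    minikube -p multinode-720500 ssh -- ip route show | grep '10.244.'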
	I0603 14:51:55.363849    9752 logs.go:123] Gathering logs for container status ...
	I0603 14:51:55.363849    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 14:51:55.434511    9752 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0603 14:51:55.434584    9752 command_runner.go:130] > f9b260d61dfbd       cbb01a7bd410d                                                                                         11 seconds ago       Running             coredns                   1                   1bc1567075734       coredns-7db6d8ff4d-c9wpc
	I0603 14:51:55.434584    9752 command_runner.go:130] > 291b656660b4b       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   526c48b9021d6       busybox-fc5497c4f-n2t5d
	I0603 14:51:55.434584    9752 command_runner.go:130] > c81abdbb29c7c       6e38f40d628db                                                                                         30 seconds ago       Running             storage-provisioner       2                   b4a4ad712a66e       storage-provisioner
	I0603 14:51:55.434584    9752 command_runner.go:130] > 008dec75d90c7       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a3698c141b116       kindnet-26s27
	I0603 14:51:55.434584    9752 command_runner.go:130] > 2061be0913b2b       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b4a4ad712a66e       storage-provisioner
	I0603 14:51:55.434584    9752 command_runner.go:130] > 42926c33070ce       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   2ae2b089ecf3b       kube-proxy-64l9x
	I0603 14:51:55.434584    9752 command_runner.go:130] > 885576ffcadd7       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   192b150e443d2       kube-apiserver-multinode-720500
	I0603 14:51:55.434584    9752 command_runner.go:130] > 480ef64cfa226       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   3e60bc15f541e       etcd-multinode-720500
	I0603 14:51:55.434584    9752 command_runner.go:130] > f14b3b67d8f28       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   29feb700b8ebf       kube-controller-manager-multinode-720500
	I0603 14:51:55.434584    9752 command_runner.go:130] > e2d000674d525       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   0461b752e7281       kube-scheduler-multinode-720500
	I0603 14:51:55.434584    9752 command_runner.go:130] > a76f9e773a2f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   e2a9c5dc3b1b0       busybox-fc5497c4f-n2t5d
	I0603 14:51:55.434584    9752 command_runner.go:130] > 68e49c3e6ddaa       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   1ac710138e878       coredns-7db6d8ff4d-c9wpc
	I0603 14:51:55.434584    9752 command_runner.go:130] > ab840a6a9856d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   91df341636e89       kindnet-26s27
	I0603 14:51:55.434584    9752 command_runner.go:130] > 3823f2e2bdb28       747097150317f                                                                                         24 minutes ago       Exited              kube-proxy                0                   45c98b77811e1       kube-proxy-64l9x
	I0603 14:51:55.434584    9752 command_runner.go:130] > 63a6ebee2e836       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   19b3080db261a       kube-controller-manager-multinode-720500
	I0603 14:51:55.434584    9752 command_runner.go:130] > ec3860b2bb3ef       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   73f8312902b01       kube-scheduler-multinode-720500
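	The container listing above is produced by the fallback one-liner in the ssh_runner call: use crictl when it is installed, otherwise fall back to the Docker CLI. The same check can be repeated by hand on the node; a minimal sketch of that fallback pattern:

	    # same crictl-or-docker fallback as the ssh_runner command above
	    if command -v crictl >/dev/null 2>&1; then
	      sudo crictl ps -a
	    else
	      sudo docker ps -a
	    fi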
	I0603 14:51:55.437050    9752 logs.go:123] Gathering logs for kindnet [ab840a6a9856] ...
	I0603 14:51:55.437050    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab840a6a9856"
	I0603 14:51:55.464262    9752 command_runner.go:130] ! I0603 14:37:02.418496       1 main.go:227] handling current node
	I0603 14:51:55.464262    9752 command_runner.go:130] ! I0603 14:37:02.418509       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.464851    9752 command_runner.go:130] ! I0603 14:37:02.418514       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.464851    9752 command_runner.go:130] ! I0603 14:37:02.419057       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.464851    9752 command_runner.go:130] ! I0603 14:37:02.419146       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.464976    9752 command_runner.go:130] ! I0603 14:37:12.433874       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.464976    9752 command_runner.go:130] ! I0603 14:37:12.433964       1 main.go:227] handling current node
	I0603 14:51:55.465288    9752 command_runner.go:130] ! I0603 14:37:12.433979       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.465288    9752 command_runner.go:130] ! I0603 14:37:12.433987       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.466023    9752 command_runner.go:130] ! I0603 14:37:12.434708       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.466023    9752 command_runner.go:130] ! I0603 14:37:12.434812       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.466023    9752 command_runner.go:130] ! I0603 14:37:22.441734       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.466353    9752 command_runner.go:130] ! I0603 14:37:22.443317       1 main.go:227] handling current node
	I0603 14:51:55.466353    9752 command_runner.go:130] ! I0603 14:37:22.443366       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.467344    9752 command_runner.go:130] ! I0603 14:37:22.443394       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.467748    9752 command_runner.go:130] ! I0603 14:37:22.443536       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469431    9752 command_runner.go:130] ! I0603 14:37:22.443544       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469507    9752 command_runner.go:130] ! I0603 14:37:32.458669       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469577    9752 command_runner.go:130] ! I0603 14:37:32.458715       1 main.go:227] handling current node
	I0603 14:51:55.469577    9752 command_runner.go:130] ! I0603 14:37:32.458746       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469577    9752 command_runner.go:130] ! I0603 14:37:32.458759       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469577    9752 command_runner.go:130] ! I0603 14:37:32.459272       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469577    9752 command_runner.go:130] ! I0603 14:37:32.459313       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469640    9752 command_runner.go:130] ! I0603 14:37:42.465893       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469640    9752 command_runner.go:130] ! I0603 14:37:42.466039       1 main.go:227] handling current node
	I0603 14:51:55.469640    9752 command_runner.go:130] ! I0603 14:37:42.466054       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469704    9752 command_runner.go:130] ! I0603 14:37:42.466062       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469704    9752 command_runner.go:130] ! I0603 14:37:42.466530       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469734    9752 command_runner.go:130] ! I0603 14:37:42.466713       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469734    9752 command_runner.go:130] ! I0603 14:37:52.484160       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469734    9752 command_runner.go:130] ! I0603 14:37:52.484343       1 main.go:227] handling current node
	I0603 14:51:55.469734    9752 command_runner.go:130] ! I0603 14:37:52.484358       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469808    9752 command_runner.go:130] ! I0603 14:37:52.484366       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469808    9752 command_runner.go:130] ! I0603 14:37:52.484918       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469808    9752 command_runner.go:130] ! I0603 14:37:52.485003       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469874    9752 command_runner.go:130] ! I0603 14:38:02.499379       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469899    9752 command_runner.go:130] ! I0603 14:38:02.500157       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:02.500459       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:02.500600       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:02.500943       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:02.501037       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:12.510568       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:12.510676       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:12.510691       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:12.510699       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:12.511065       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:12.511143       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:22.523564       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:22.523667       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:22.523681       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:22.523690       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:22.524005       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:22.524127       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:32.531830       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:32.532127       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:32.532312       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:32.532328       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:32.532640       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:32.532677       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:42.545963       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:42.546065       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:42.546080       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:42.546088       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:42.546348       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:42.546488       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:52.559438       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:52.559480       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:52.559491       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:52.559497       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:52.559891       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:52.560039       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:39:02.565901       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:39:02.566044       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:39:02.566059       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470458    9752 command_runner.go:130] ! I0603 14:39:02.566066       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470458    9752 command_runner.go:130] ! I0603 14:39:02.566452       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470504    9752 command_runner.go:130] ! I0603 14:39:02.566542       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470504    9752 command_runner.go:130] ! I0603 14:39:12.580562       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470504    9752 command_runner.go:130] ! I0603 14:39:12.580900       1 main.go:227] handling current node
	I0603 14:51:55.470504    9752 command_runner.go:130] ! I0603 14:39:12.581000       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470504    9752 command_runner.go:130] ! I0603 14:39:12.581036       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470504    9752 command_runner.go:130] ! I0603 14:39:12.581299       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470606    9752 command_runner.go:130] ! I0603 14:39:12.581368       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470606    9752 command_runner.go:130] ! I0603 14:39:22.589560       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470606    9752 command_runner.go:130] ! I0603 14:39:22.589667       1 main.go:227] handling current node
	I0603 14:51:55.470606    9752 command_runner.go:130] ! I0603 14:39:22.589684       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470606    9752 command_runner.go:130] ! I0603 14:39:22.589692       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470606    9752 command_runner.go:130] ! I0603 14:39:22.590588       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470676    9752 command_runner.go:130] ! I0603 14:39:22.590765       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:32.597414       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:32.597518       1 main.go:227] handling current node
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:32.597534       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:32.597541       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:32.597952       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:32.598225       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:42.608987       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:42.609016       1 main.go:227] handling current node
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:42.609075       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:42.609129       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:42.609601       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:42.609617       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:52.622153       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:52.622304       1 main.go:227] handling current node
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:52.622322       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:52.622329       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:52.622994       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:52.623087       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:02.643681       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:02.643725       1 main.go:227] handling current node
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:02.643738       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:02.643744       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:02.644288       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:02.644378       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:12.652030       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:12.652123       1 main.go:227] handling current node
	I0603 14:51:55.471328    9752 command_runner.go:130] ! I0603 14:40:12.652138       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.471429    9752 command_runner.go:130] ! I0603 14:40:12.652145       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.472310    9752 command_runner.go:130] ! I0603 14:40:12.652402       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:12.652480       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:22.661893       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:22.661999       1 main.go:227] handling current node
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:22.662015       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:22.662023       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:22.662623       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:22.662711       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.472874    9752 command_runner.go:130] ! I0603 14:40:32.676552       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.472904    9752 command_runner.go:130] ! I0603 14:40:32.676654       1 main.go:227] handling current node
	I0603 14:51:55.472904    9752 command_runner.go:130] ! I0603 14:40:32.676669       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.472904    9752 command_runner.go:130] ! I0603 14:40:32.676677       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.472980    9752 command_runner.go:130] ! I0603 14:40:32.676798       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.472980    9752 command_runner.go:130] ! I0603 14:40:32.676829       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.472980    9752 command_runner.go:130] ! I0603 14:40:42.690358       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473057    9752 command_runner.go:130] ! I0603 14:40:42.690463       1 main.go:227] handling current node
	I0603 14:51:55.473080    9752 command_runner.go:130] ! I0603 14:40:42.690478       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473080    9752 command_runner.go:130] ! I0603 14:40:42.690485       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:42.691131       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:42.691265       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:52.704086       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:52.704406       1 main.go:227] handling current node
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:52.704615       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:52.704801       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:52.705555       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:52.705594       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:02.714922       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:02.715404       1 main.go:227] handling current node
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:02.715629       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:02.715697       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:02.715836       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:02.717286       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:12.733829       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:12.733940       1 main.go:227] handling current node
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:12.733954       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:12.733962       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:12.734767       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:12.734861       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:22.747461       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:22.747575       1 main.go:227] handling current node
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:22.747589       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:22.747596       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:22.748388       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:22.748478       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:32.755048       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:32.755098       1 main.go:227] handling current node
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:32.755111       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:32.755118       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:32.755281       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:32.755297       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:42.769640       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473644    9752 command_runner.go:130] ! I0603 14:41:42.769732       1 main.go:227] handling current node
	I0603 14:51:55.473644    9752 command_runner.go:130] ! I0603 14:41:42.769748       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473644    9752 command_runner.go:130] ! I0603 14:41:42.769756       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473689    9752 command_runner.go:130] ! I0603 14:41:42.769900       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473740    9752 command_runner.go:130] ! I0603 14:41:42.769930       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473740    9752 command_runner.go:130] ! I0603 14:41:52.777787       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473740    9752 command_runner.go:130] ! I0603 14:41:52.777885       1 main.go:227] handling current node
	I0603 14:51:55.473740    9752 command_runner.go:130] ! I0603 14:41:52.777901       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473803    9752 command_runner.go:130] ! I0603 14:41:52.777909       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473803    9752 command_runner.go:130] ! I0603 14:41:52.778034       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473803    9752 command_runner.go:130] ! I0603 14:41:52.778047       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473803    9752 command_runner.go:130] ! I0603 14:42:02.796158       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473886    9752 command_runner.go:130] ! I0603 14:42:02.796336       1 main.go:227] handling current node
	I0603 14:51:55.473908    9752 command_runner.go:130] ! I0603 14:42:02.796352       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:02.796361       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:02.796675       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:02.796693       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:12.804901       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:12.805658       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:12.805981       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:12.806077       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:12.808338       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:12.808446       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:22.822735       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:22.822779       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:22.822792       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:22.822798       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:22.823041       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:22.823056       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:32.829730       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:32.829780       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:32.829793       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:32.829798       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:32.830081       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:32.830157       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:42.843959       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:42.844251       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:42.844269       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:42.844278       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:42.844481       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:42.844489       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:52.970825       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:52.970941       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:52.970957       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:52.970965       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:52.971359       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:52.971390       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:02.985233       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:02.985707       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:02.985801       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:02.985813       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:02.986087       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:02.986213       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:13.001792       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:13.001903       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:13.001919       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474472    9752 command_runner.go:130] ! I0603 14:43:13.001926       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474472    9752 command_runner.go:130] ! I0603 14:43:13.002409       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474472    9752 command_runner.go:130] ! I0603 14:43:13.002546       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474552    9752 command_runner.go:130] ! I0603 14:43:23.014350       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474552    9752 command_runner.go:130] ! I0603 14:43:23.014430       1 main.go:227] handling current node
	I0603 14:51:55.474552    9752 command_runner.go:130] ! I0603 14:43:23.014443       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474552    9752 command_runner.go:130] ! I0603 14:43:23.014466       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474552    9752 command_runner.go:130] ! I0603 14:43:23.014973       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474552    9752 command_runner.go:130] ! I0603 14:43:23.015050       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474635    9752 command_runner.go:130] ! I0603 14:43:33.028486       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474635    9752 command_runner.go:130] ! I0603 14:43:33.028618       1 main.go:227] handling current node
	I0603 14:51:55.474635    9752 command_runner.go:130] ! I0603 14:43:33.028632       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474635    9752 command_runner.go:130] ! I0603 14:43:33.028639       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474635    9752 command_runner.go:130] ! I0603 14:43:33.028797       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474709    9752 command_runner.go:130] ! I0603 14:43:33.029137       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474709    9752 command_runner.go:130] ! I0603 14:43:43.042807       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474709    9752 command_runner.go:130] ! I0603 14:43:43.042971       1 main.go:227] handling current node
	I0603 14:51:55.474709    9752 command_runner.go:130] ! I0603 14:43:43.043055       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474709    9752 command_runner.go:130] ! I0603 14:43:43.043063       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474830    9752 command_runner.go:130] ! I0603 14:43:43.043998       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474854    9752 command_runner.go:130] ! I0603 14:43:43.044018       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:43:53.060985       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:43:53.061106       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:43:53.061142       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:43:53.061153       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:43:53.061441       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:43:53.061530       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:03.074882       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:03.075006       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:03.075023       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:03.075031       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:03.075251       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:03.075287       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:13.082515       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:13.082634       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:13.082649       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:13.082657       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:13.083854       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:13.084020       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:23.096516       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:23.096561       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:23.096574       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:23.096585       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:23.098310       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:23.098383       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:33.105034       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:33.105146       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:33.105199       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:33.105211       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:33.105354       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:33.105362       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:43.115437       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:43.115557       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:43.115572       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:43.115580       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:43.116248       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:43.116325       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:53.129841       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:53.129952       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:53.129967       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475435    9752 command_runner.go:130] ! I0603 14:44:53.129992       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475435    9752 command_runner.go:130] ! I0603 14:44:53.130474       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475435    9752 command_runner.go:130] ! I0603 14:44:53.130513       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475435    9752 command_runner.go:130] ! I0603 14:45:03.145387       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475501    9752 command_runner.go:130] ! I0603 14:45:03.145506       1 main.go:227] handling current node
	I0603 14:51:55.475501    9752 command_runner.go:130] ! I0603 14:45:03.145522       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475501    9752 command_runner.go:130] ! I0603 14:45:03.145529       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475552    9752 command_runner.go:130] ! I0603 14:45:03.145991       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475552    9752 command_runner.go:130] ! I0603 14:45:03.146104       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475552    9752 command_runner.go:130] ! I0603 14:45:13.154208       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475552    9752 command_runner.go:130] ! I0603 14:45:13.154303       1 main.go:227] handling current node
	I0603 14:51:55.475613    9752 command_runner.go:130] ! I0603 14:45:13.154318       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475613    9752 command_runner.go:130] ! I0603 14:45:13.154325       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475613    9752 command_runner.go:130] ! I0603 14:45:13.154444       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475613    9752 command_runner.go:130] ! I0603 14:45:13.154751       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475678    9752 command_runner.go:130] ! I0603 14:45:23.167023       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475678    9752 command_runner.go:130] ! I0603 14:45:23.167139       1 main.go:227] handling current node
	I0603 14:51:55.475703    9752 command_runner.go:130] ! I0603 14:45:23.167156       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:23.167204       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:23.167490       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:23.167675       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:33.182518       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:33.182565       1 main.go:227] handling current node
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:33.182579       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:33.182586       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:33.183095       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:33.183227       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:43.191204       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:43.191291       1 main.go:227] handling current node
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:43.191307       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:43.191316       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:43.191713       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:43.191805       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:53.200715       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:53.200890       1 main.go:227] handling current node
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:53.200927       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:53.200936       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:53.201688       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:53.201766       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:03.207719       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:03.207807       1 main.go:227] handling current node
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:03.207821       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:03.207828       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.222386       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.222505       1 main.go:227] handling current node
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.222522       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.222530       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.223020       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.223269       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.223648       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.22.151.134 Flags: [] Table: 0} 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:23.237715       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:23.237767       1 main.go:227] handling current node
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:23.237797       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:23.237803       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:23.237989       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:23.238008       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:33.244795       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:33.244940       1 main.go:227] handling current node
	I0603 14:51:55.476258    9752 command_runner.go:130] ! I0603 14:46:33.244960       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476258    9752 command_runner.go:130] ! I0603 14:46:33.244971       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476258    9752 command_runner.go:130] ! I0603 14:46:33.245647       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476302    9752 command_runner.go:130] ! I0603 14:46:33.245764       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476302    9752 command_runner.go:130] ! I0603 14:46:43.261658       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476302    9752 command_runner.go:130] ! I0603 14:46:43.262286       1 main.go:227] handling current node
	I0603 14:51:55.476302    9752 command_runner.go:130] ! I0603 14:46:43.262368       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476302    9752 command_runner.go:130] ! I0603 14:46:43.262496       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476302    9752 command_runner.go:130] ! I0603 14:46:43.262847       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476400    9752 command_runner.go:130] ! I0603 14:46:43.262938       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476400    9752 command_runner.go:130] ! I0603 14:46:53.275414       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476400    9752 command_runner.go:130] ! I0603 14:46:53.275880       1 main.go:227] handling current node
	I0603 14:51:55.476400    9752 command_runner.go:130] ! I0603 14:46:53.276199       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476400    9752 command_runner.go:130] ! I0603 14:46:53.276372       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476479    9752 command_runner.go:130] ! I0603 14:46:53.276690       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476503    9752 command_runner.go:130] ! I0603 14:46:53.276766       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:03.282970       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:03.283067       1 main.go:227] handling current node
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:03.283157       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:03.283220       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:03.283747       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:03.283832       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:13.289208       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:13.289296       1 main.go:227] handling current node
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:13.289311       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:13.289321       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:13.290501       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:13.290610       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:23.305390       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:23.305479       1 main.go:227] handling current node
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:23.305494       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:23.305501       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:23.306027       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:23.306196       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:33.320017       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:33.320267       1 main.go:227] handling current node
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:33.320364       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:33.320399       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:33.320800       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:33.320833       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:43.329989       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:43.330122       1 main.go:227] handling current node
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:43.330326       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:43.330486       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:43.331007       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:43.331092       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.477068    9752 command_runner.go:130] ! I0603 14:47:53.346870       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.477116    9752 command_runner.go:130] ! I0603 14:47:53.347021       1 main.go:227] handling current node
	I0603 14:51:55.477116    9752 command_runner.go:130] ! I0603 14:47:53.347035       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.477170    9752 command_runner.go:130] ! I0603 14:47:53.347043       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.477170    9752 command_runner.go:130] ! I0603 14:47:53.347400       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.477196    9752 command_runner.go:130] ! I0603 14:47:53.347581       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.477196    9752 command_runner.go:130] ! I0603 14:48:03.360705       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.477234    9752 command_runner.go:130] ! I0603 14:48:03.360878       1 main.go:227] handling current node
	I0603 14:51:55.477234    9752 command_runner.go:130] ! I0603 14:48:03.360896       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.477234    9752 command_runner.go:130] ! I0603 14:48:03.360904       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.477234    9752 command_runner.go:130] ! I0603 14:48:03.361256       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.477234    9752 command_runner.go:130] ! I0603 14:48:03.361334       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.494348    9752 logs.go:123] Gathering logs for dmesg ...
	I0603 14:51:55.494348    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 14:51:55.519414    9752 command_runner.go:130] > [Jun 3 14:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.128622] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.023991] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.059620] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.020549] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0603 14:51:55.519414    9752 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +5.342920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.685939] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +1.735023] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [Jun 3 14:49] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0603 14:51:55.519414    9752 command_runner.go:130] > [ +50.878858] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.173829] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [Jun 3 14:50] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.115993] kauditd_printk_skb: 73 callbacks suppressed
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.526092] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.219569] systemd-fstab-generator[1032]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.239915] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +2.915659] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.214861] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.207351] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.266530] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.876661] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.110633] kauditd_printk_skb: 205 callbacks suppressed
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +3.640158] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +1.365325] kauditd_printk_skb: 49 callbacks suppressed
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +5.844179] kauditd_printk_skb: 25 callbacks suppressed
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +3.106296] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +8.568344] kauditd_printk_skb: 70 callbacks suppressed
	I0603 14:51:55.521353    9752 logs.go:123] Gathering logs for describe nodes ...
	I0603 14:51:55.521353    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 14:51:55.733540    9752 command_runner.go:130] > Name:               multinode-720500
	I0603 14:51:55.733660    9752 command_runner.go:130] > Roles:              control-plane
	I0603 14:51:55.733660    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:55.733660    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:55.733660    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:55.733806    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500
	I0603 14:51:55.733806    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:55.733834    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:55.733834    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:55.733834    9752 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0603 14:51:55.733885    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_27_19_0700
	I0603 14:51:55.733885    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:55.733885    9752 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0603 14:51:55.733885    9752 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0603 14:51:55.733885    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:55.733885    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:55.733885    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:55.733885    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:27:15 +0000
	I0603 14:51:55.733885    9752 command_runner.go:130] > Taints:             <none>
	I0603 14:51:55.733885    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:55.733885    9752 command_runner.go:130] > Lease:
	I0603 14:51:55.733885    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500
	I0603 14:51:55.733885    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:55.733885    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:51:51 +0000
	I0603 14:51:55.733885    9752 command_runner.go:130] > Conditions:
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0603 14:51:55.733885    9752 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0603 14:51:55.733885    9752 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0603 14:51:55.733885    9752 command_runner.go:130] >   DiskPressure     False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0603 14:51:55.733885    9752 command_runner.go:130] >   PIDPressure      False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Ready            True    Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:51:20 +0000   KubeletReady                 kubelet is posting ready status
	I0603 14:51:55.733885    9752 command_runner.go:130] > Addresses:
	I0603 14:51:55.733885    9752 command_runner.go:130] >   InternalIP:  172.22.154.20
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Hostname:    multinode-720500
	I0603 14:51:55.733885    9752 command_runner.go:130] > Capacity:
	I0603 14:51:55.733885    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:55.733885    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:55.733885    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:55.733885    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:55.733885    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:55.733885    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:55.733885    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:55.733885    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:55.733885    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:55.733885    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:55.733885    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:55.733885    9752 command_runner.go:130] > System Info:
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Machine ID:                 d1c31924319744c587cc3327e70686c4
	I0603 14:51:55.733885    9752 command_runner.go:130] >   System UUID:                ea941aa7-cd12-1640-be08-34f8de2baf60
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Boot ID:                    81a28d6f-5e2f-4dbf-9879-01594b427fd6
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:55.733885    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:55.733885    9752 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0603 14:51:55.733885    9752 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0603 14:51:55.733885    9752 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:55.733885    9752 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0603 14:51:55.734418    9752 command_runner.go:130] >   default                     busybox-fc5497c4f-n2t5d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 14:51:55.734462    9752 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-c9wpc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0603 14:51:55.734462    9752 command_runner.go:130] >   kube-system                 etcd-multinode-720500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0603 14:51:55.734462    9752 command_runner.go:130] >   kube-system                 kindnet-26s27                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0603 14:51:55.734521    9752 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-720500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0603 14:51:55.734550    9752 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-720500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:55.734550    9752 command_runner.go:130] >   kube-system                 kube-proxy-64l9x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:55.734610    9752 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-720500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:55.734634    9752 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:55.734634    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:55.734634    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:55.734634    9752 command_runner.go:130] >   Resource           Requests     Limits
	I0603 14:51:55.734634    9752 command_runner.go:130] >   --------           --------     ------
	I0603 14:51:55.734690    9752 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0603 14:51:55.734690    9752 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0603 14:51:55.734690    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0603 14:51:55.734690    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0603 14:51:55.734690    9752 command_runner.go:130] > Events:
	I0603 14:51:55.734690    9752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 14:51:55.734752    9752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 14:51:55.734752    9752 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0603 14:51:55.734752    9752 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I0603 14:51:55.734813    9752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 14:51:55.734837    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:55.734837    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:55.734890    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:55.734915    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:55.734915    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:55.734915    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:55.734972    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:55.734972    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:55.734996    9752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 14:51:55.734996    9752 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	I0603 14:51:55.734996    9752 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-720500 status is now: NodeReady
	I0603 14:51:55.735054    9752 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0603 14:51:55.735054    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  81s (x8 over 81s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:55.735118    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    81s (x8 over 81s)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:55.735118    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     81s (x7 over 81s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:55.735118    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:55.735118    9752 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	I0603 14:51:55.735118    9752 command_runner.go:130] > Name:               multinode-720500-m02
	I0603 14:51:55.735186    9752 command_runner.go:130] > Roles:              <none>
	I0603 14:51:55.735186    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:55.735186    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:55.735186    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:55.735186    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500-m02
	I0603 14:51:55.735246    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:55.735246    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:55.735246    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:55.735246    9752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 14:51:55.735246    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_30_31_0700
	I0603 14:51:55.735313    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:55.735313    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:55.735313    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:55.735313    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:55.735313    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:30:30 +0000
	I0603 14:51:55.735399    9752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 14:51:55.735399    9752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 14:51:55.735456    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:55.735456    9752 command_runner.go:130] > Lease:
	I0603 14:51:55.735456    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500-m02
	I0603 14:51:55.735485    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:55.735485    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:47:23 +0000
	I0603 14:51:55.735485    9752 command_runner.go:130] > Conditions:
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 14:51:55.735485    9752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 14:51:55.735485    9752 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.735485    9752 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.735485    9752 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.735485    9752 command_runner.go:130] > Addresses:
	I0603 14:51:55.735485    9752 command_runner.go:130] >   InternalIP:  172.22.146.196
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Hostname:    multinode-720500-m02
	I0603 14:51:55.735485    9752 command_runner.go:130] > Capacity:
	I0603 14:51:55.735485    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:55.735485    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:55.735485    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:55.735485    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:55.735485    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:55.735485    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:55.735485    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:55.735485    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:55.735485    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:55.735485    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:55.735485    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:55.735485    9752 command_runner.go:130] > System Info:
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Machine ID:                 235e819893284fd6a235e0cb3c7475f0
	I0603 14:51:55.735485    9752 command_runner.go:130] >   System UUID:                e57aaa06-73e1-b24d-bfac-b1ae5e512ff1
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Boot ID:                    fe92bdd5-fbf4-4f1a-9684-a535d77de9c7
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:55.735485    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:55.735485    9752 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0603 14:51:55.735485    9752 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0603 14:51:55.735485    9752 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:55.735485    9752 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0603 14:51:55.735485    9752 command_runner.go:130] >   default                     busybox-fc5497c4f-mjhcf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 14:51:55.735485    9752 command_runner.go:130] >   kube-system                 kindnet-fmfz2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0603 14:51:55.735485    9752 command_runner.go:130] >   kube-system                 kube-proxy-sm9rr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0603 14:51:55.735485    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:55.735485    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Resource           Requests   Limits
	I0603 14:51:55.735485    9752 command_runner.go:130] >   --------           --------   ------
	I0603 14:51:55.735485    9752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 14:51:55.736011    9752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 14:51:55.736011    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 14:51:55.736011    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 14:51:55.736011    9752 command_runner.go:130] > Events:
	I0603 14:51:55.736011    9752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 14:51:55.736056    9752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientMemory
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasNoDiskPressure
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientPID
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-720500-m02 status is now: NodeReady
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  NodeNotReady             3m48s              node-controller  Node multinode-720500-m02 status is now: NodeNotReady
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	I0603 14:51:55.736089    9752 command_runner.go:130] > Name:               multinode-720500-m03
	I0603 14:51:55.736089    9752 command_runner.go:130] > Roles:              <none>
	I0603 14:51:55.736089    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500-m03
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_46_05_0700
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:55.736089    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:55.736089    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:46:04 +0000
	I0603 14:51:55.736089    9752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 14:51:55.736089    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:55.736089    9752 command_runner.go:130] > Lease:
	I0603 14:51:55.736089    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500-m03
	I0603 14:51:55.736089    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:55.736089    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:47:06 +0000
	I0603 14:51:55.736089    9752 command_runner.go:130] > Conditions:
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 14:51:55.736089    9752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 14:51:55.736089    9752 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.736089    9752 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.736089    9752 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.736089    9752 command_runner.go:130] > Addresses:
	I0603 14:51:55.736089    9752 command_runner.go:130] >   InternalIP:  172.22.151.134
	I0603 14:51:55.736630    9752 command_runner.go:130] >   Hostname:    multinode-720500-m03
	I0603 14:51:55.736630    9752 command_runner.go:130] > Capacity:
	I0603 14:51:55.736630    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:55.736691    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:55.736691    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:55.736691    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:55.736691    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:55.736748    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:55.736748    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:55.736748    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:55.736811    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:55.736811    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:55.736834    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:55.736834    9752 command_runner.go:130] > System Info:
	I0603 14:51:55.736834    9752 command_runner.go:130] >   Machine ID:                 b3fc7859c5954f1297433aed117b91b8
	I0603 14:51:55.736834    9752 command_runner.go:130] >   System UUID:                e10deb53-3c27-6749-b4b3-758259579a7c
	I0603 14:51:55.736834    9752 command_runner.go:130] >   Boot ID:                    c5481ad8-4fd9-4085-86d3-6f705a8caf45
	I0603 14:51:55.736834    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:55.736834    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:55.736834    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:55.736933    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:55.736951    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:55.736951    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:55.736951    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:55.736951    9752 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0603 14:51:55.736951    9752 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0603 14:51:55.736951    9752 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0603 14:51:55.737039    9752 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:55.737065    9752 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0603 14:51:55.737091    9752 command_runner.go:130] >   kube-system                 kindnet-h58hc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0603 14:51:55.737091    9752 command_runner.go:130] >   kube-system                 kube-proxy-ctm5l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0603 14:51:55.737091    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:55.737121    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:55.737148    9752 command_runner.go:130] >   Resource           Requests   Limits
	I0603 14:51:55.737194    9752 command_runner.go:130] >   --------           --------   ------
	I0603 14:51:55.737217    9752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 14:51:55.737236    9752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 14:51:55.737285    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 14:51:55.737285    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 14:51:55.737319    9752 command_runner.go:130] > Events:
	I0603 14:51:55.737319    9752 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0603 14:51:55.737341    9752 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  Starting                 5m47s                  kube-proxy       
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-720500-m03 status is now: NodeReady
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m51s (x2 over 5m51s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m51s (x2 over 5m51s)  kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m51s (x2 over 5m51s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m51s                  kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  RegisteredNode           5m48s                  node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeReady                5m44s                  kubelet          Node multinode-720500-m03 status is now: NodeReady
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeNotReady             4m8s                   node-controller  Node multinode-720500-m03 status is now: NodeNotReady
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  RegisteredNode           63s                    node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	I0603 14:51:55.746835    9752 logs.go:123] Gathering logs for coredns [f9b260d61dfb] ...
	I0603 14:51:55.746835    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b260d61dfb"
	I0603 14:51:55.774878    9752 command_runner.go:130] > .:53
	I0603 14:51:55.774956    9752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	I0603 14:51:55.774956    9752 command_runner.go:130] > CoreDNS-1.11.1
	I0603 14:51:55.774956    9752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 14:51:55.774956    9752 command_runner.go:130] > [INFO] 127.0.0.1:44244 - 27530 "HINFO IN 6157212600695805867.8146164028617998750. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029059168s
	I0603 14:51:55.774956    9752 logs.go:123] Gathering logs for kube-scheduler [e2d000674d52] ...
	I0603 14:51:55.774956    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2d000674d52"
	I0603 14:51:55.798606    9752 command_runner.go:130] ! I0603 14:50:36.598072       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:55.799927    9752 command_runner.go:130] ! W0603 14:50:39.337367       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 14:51:55.799927    9752 command_runner.go:130] ! W0603 14:50:39.337481       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:55.800013    9752 command_runner.go:130] ! W0603 14:50:39.337517       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 14:51:55.800013    9752 command_runner.go:130] ! W0603 14:50:39.337620       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:51:55.800108    9752 command_runner.go:130] ! I0603 14:50:39.434477       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:51:55.800108    9752 command_runner.go:130] ! I0603 14:50:39.434769       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.800108    9752 command_runner.go:130] ! I0603 14:50:39.439758       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:51:55.800108    9752 command_runner.go:130] ! I0603 14:50:39.442615       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:51:55.800108    9752 command_runner.go:130] ! I0603 14:50:39.442644       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:55.800183    9752 command_runner.go:130] ! I0603 14:50:39.443721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:55.800183    9752 command_runner.go:130] ! I0603 14:50:39.542876       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:55.802826    9752 logs.go:123] Gathering logs for kube-controller-manager [f14b3b67d8f2] ...
	I0603 14:51:55.802878    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14b3b67d8f2"
	I0603 14:51:55.838656    9752 command_runner.go:130] ! I0603 14:50:37.132219       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:55.839193    9752 command_runner.go:130] ! I0603 14:50:37.965887       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 14:51:55.839193    9752 command_runner.go:130] ! I0603 14:50:37.966244       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.839193    9752 command_runner.go:130] ! I0603 14:50:37.969206       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:55.839273    9752 command_runner.go:130] ! I0603 14:50:37.969593       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:55.839273    9752 command_runner.go:130] ! I0603 14:50:37.970401       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 14:51:55.839273    9752 command_runner.go:130] ! I0603 14:50:37.970711       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:55.839273    9752 command_runner.go:130] ! I0603 14:50:41.339512       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 14:51:55.839342    9752 command_runner.go:130] ! I0603 14:50:41.341523       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 14:51:55.839342    9752 command_runner.go:130] ! E0603 14:50:41.352670       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 14:51:55.839342    9752 command_runner.go:130] ! I0603 14:50:41.352747       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 14:51:55.839397    9752 command_runner.go:130] ! I0603 14:50:41.352812       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 14:51:55.839651    9752 command_runner.go:130] ! I0603 14:50:41.408502       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 14:51:55.839651    9752 command_runner.go:130] ! I0603 14:50:41.409411       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 14:51:55.840545    9752 command_runner.go:130] ! I0603 14:50:41.409645       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 14:51:55.840545    9752 command_runner.go:130] ! I0603 14:50:41.419223       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 14:51:55.840545    9752 command_runner.go:130] ! I0603 14:50:41.421972       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 14:51:55.840545    9752 command_runner.go:130] ! I0603 14:50:41.422044       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 14:51:55.840545    9752 command_runner.go:130] ! I0603 14:50:41.427251       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 14:51:55.840663    9752 command_runner.go:130] ! I0603 14:50:41.427473       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 14:51:55.840663    9752 command_runner.go:130] ! I0603 14:50:41.427485       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.433520       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.433884       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.442828       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.442944       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.443317       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.443408       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.443456       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.444287       1 shared_informer.go:320] Caches are synced for tokens
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.448688       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.448996       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.449010       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.471390       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.478411       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.478486       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.496707       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.496851       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.496864       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.512398       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.512785       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.514642       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.526995       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.528483       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.528503       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.560312       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.560410       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.560606       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! W0603 14:50:41.560637       1 shared_informer.go:597] resyncPeriod 13h36m9.576172414s is smaller than resyncCheckPeriod 18h19m8.512720564s and the informer has already started. Changing it to 18h19m8.512720564s
	I0603 14:51:55.841317    9752 command_runner.go:130] ! I0603 14:50:41.560790       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 14:51:55.841317    9752 command_runner.go:130] ! I0603 14:50:41.560834       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 14:51:55.841317    9752 command_runner.go:130] ! I0603 14:50:41.561009       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 14:51:55.841396    9752 command_runner.go:130] ! I0603 14:50:41.562817       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 14:51:55.841396    9752 command_runner.go:130] ! I0603 14:50:41.562891       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 14:51:55.841468    9752 command_runner.go:130] ! I0603 14:50:41.562939       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 14:51:55.841468    9752 command_runner.go:130] ! I0603 14:50:41.562993       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 14:51:55.841468    9752 command_runner.go:130] ! I0603 14:50:41.563015       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 14:51:55.841545    9752 command_runner.go:130] ! I0603 14:50:41.563032       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 14:51:55.841571    9752 command_runner.go:130] ! I0603 14:50:41.563098       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 14:51:55.841571    9752 command_runner.go:130] ! I0603 14:50:41.564183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 14:51:55.841617    9752 command_runner.go:130] ! I0603 14:50:41.564221       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 14:51:55.841661    9752 command_runner.go:130] ! I0603 14:50:41.564392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 14:51:55.841661    9752 command_runner.go:130] ! I0603 14:50:41.564485       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 14:51:55.841703    9752 command_runner.go:130] ! I0603 14:50:41.564524       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 14:51:55.841703    9752 command_runner.go:130] ! I0603 14:50:41.564636       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 14:51:55.841703    9752 command_runner.go:130] ! I0603 14:50:41.564663       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 14:51:55.841765    9752 command_runner.go:130] ! I0603 14:50:41.564687       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 14:51:55.841765    9752 command_runner.go:130] ! I0603 14:50:41.565005       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 14:51:55.841765    9752 command_runner.go:130] ! I0603 14:50:41.565020       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:55.841852    9752 command_runner.go:130] ! I0603 14:50:41.565041       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 14:51:55.841879    9752 command_runner.go:130] ! I0603 14:50:41.581314       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 14:51:55.841936    9752 command_runner.go:130] ! I0603 14:50:41.587130       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 14:51:55.841968    9752 command_runner.go:130] ! I0603 14:50:41.587228       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 14:51:55.841968    9752 command_runner.go:130] ! I0603 14:50:41.587968       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.594087       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.594455       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.594469       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.597147       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.597498       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.597530       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.607190       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.607598       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.607632       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.610674       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.610909       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.611242       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.614142       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.614447       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.614483       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.635724       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.635913       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.635952       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.636091       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.640219       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.640668       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.640872       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.653671       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.654023       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.654058       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.667205       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.667229       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.667236       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.669727       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.669883       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.726233       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.726660       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.729282       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.729661       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.729876       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.736485       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.737260       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 14:51:55.841996    9752 command_runner.go:130] ! E0603 14:50:41.740502       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.740814       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.740933       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.741056       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.750961       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.751223       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.751477       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.792608       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.792759       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.792773       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.844612       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.844676       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.844688       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.896427       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 14:51:55.842896    9752 command_runner.go:130] ! I0603 14:50:41.896537       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 14:51:55.842945    9752 command_runner.go:130] ! I0603 14:50:41.896561       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 14:51:55.842945    9752 command_runner.go:130] ! I0603 14:50:41.896589       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 14:51:55.842945    9752 command_runner.go:130] ! I0603 14:50:41.942852       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 14:51:55.842945    9752 command_runner.go:130] ! I0603 14:50:41.943245       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 14:51:55.842945    9752 command_runner.go:130] ! I0603 14:50:41.943758       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 14:51:55.842945    9752 command_runner.go:130] ! I0603 14:50:41.993465       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 14:51:55.843068    9752 command_runner.go:130] ! I0603 14:50:41.993559       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 14:51:55.843068    9752 command_runner.go:130] ! I0603 14:50:41.993571       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 14:51:55.843068    9752 command_runner.go:130] ! I0603 14:50:42.042940       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 14:51:55.843068    9752 command_runner.go:130] ! I0603 14:50:42.043287       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 14:51:55.843137    9752 command_runner.go:130] ! I0603 14:50:42.043532       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 14:51:55.843137    9752 command_runner.go:130] ! I0603 14:50:42.043637       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 14:51:55.843137    9752 command_runner.go:130] ! I0603 14:50:52.110253       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 14:51:55.843194    9752 command_runner.go:130] ! I0603 14:50:52.110544       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 14:51:55.843218    9752 command_runner.go:130] ! I0603 14:50:52.110823       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 14:51:55.843218    9752 command_runner.go:130] ! I0603 14:50:52.111251       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 14:51:55.843218    9752 command_runner.go:130] ! I0603 14:50:52.114516       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 14:51:55.843289    9752 command_runner.go:130] ! I0603 14:50:52.114754       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 14:51:55.843289    9752 command_runner.go:130] ! I0603 14:50:52.114859       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 14:51:55.843289    9752 command_runner.go:130] ! I0603 14:50:52.115420       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 14:51:55.843289    9752 command_runner.go:130] ! I0603 14:50:52.120172       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 14:51:55.843289    9752 command_runner.go:130] ! I0603 14:50:52.120726       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 14:51:55.843378    9752 command_runner.go:130] ! I0603 14:50:52.120900       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 14:51:55.843378    9752 command_runner.go:130] ! I0603 14:50:52.130702       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 14:51:55.843378    9752 command_runner.go:130] ! I0603 14:50:52.132004       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 14:51:55.843378    9752 command_runner.go:130] ! I0603 14:50:52.132310       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 14:51:55.843439    9752 command_runner.go:130] ! I0603 14:50:52.135969       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 14:51:55.843439    9752 command_runner.go:130] ! I0603 14:50:52.136243       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 14:51:55.843464    9752 command_runner.go:130] ! I0603 14:50:52.136643       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.137507       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.137603       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.137643       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.137983       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.138267       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.138302       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.138609       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.138713       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.138746       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.138986       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.143612       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.143872       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.143971       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.153209       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.172692       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.193739       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.202204       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500\" does not exist"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.202247       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:51:55.844088    9752 command_runner.go:130] ! I0603 14:50:52.202568       1 shared_informer.go:320] Caches are synced for TTL
	I0603 14:51:55.844088    9752 command_runner.go:130] ! I0603 14:50:52.202880       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:55.844177    9752 command_runner.go:130] ! I0603 14:50:52.206448       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:55.844177    9752 command_runner.go:130] ! I0603 14:50:52.209857       1 shared_informer.go:320] Caches are synced for expand
	I0603 14:51:55.844177    9752 command_runner.go:130] ! I0603 14:50:52.210173       1 shared_informer.go:320] Caches are synced for namespace
	I0603 14:51:55.844177    9752 command_runner.go:130] ! I0603 14:50:52.211842       1 shared_informer.go:320] Caches are synced for node
	I0603 14:51:55.844177    9752 command_runner.go:130] ! I0603 14:50:52.213573       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 14:51:55.844177    9752 command_runner.go:130] ! I0603 14:50:52.213786       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 14:51:55.844259    9752 command_runner.go:130] ! I0603 14:50:52.213951       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.214197       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.227537       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.228829       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.230275       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.233623       1 shared_informer.go:320] Caches are synced for HPA
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.237260       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.238266       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.238408       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.238593       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.239064       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.242643       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.243734       1 shared_informer.go:320] Caches are synced for taint
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.243982       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.246907       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.248798       1 shared_informer.go:320] Caches are synced for GC
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.249570       1 shared_informer.go:320] Caches are synced for service account
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.252842       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.254214       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.278584       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.278573       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.278738       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.278760       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.279382       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.288184       1 shared_informer.go:320] Caches are synced for disruption
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.293854       1 shared_informer.go:320] Caches are synced for deployment
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.294911       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.297844       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.297906       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.303945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.988424ms"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.304988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.899µs"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.309899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.433483ms"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.310618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.311874       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.315773       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.322625       1 shared_informer.go:320] Caches are synced for job
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.328121       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:51:55.844820    9752 command_runner.go:130] ! I0603 14:50:52.345391       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:51:55.844820    9752 command_runner.go:130] ! I0603 14:50:52.415295       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:51:55.844820    9752 command_runner.go:130] ! I0603 14:50:52.416018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:51:55.844820    9752 command_runner.go:130] ! I0603 14:50:52.421610       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:50:52.453966       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:50:52.465679       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:50:52.907461       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:50:52.937479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:50:52.937578       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:51:22.286800       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:51:45.740640       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.050345ms"
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:51:45.740735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.201µs"
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:51:45.758728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.201µs"
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:51:45.833756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.845189ms"
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:51:45.833914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.301µs"
	I0603 14:51:55.862777    9752 logs.go:123] Gathering logs for kubelet ...
	I0603 14:51:55.863779    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.461169    1389 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.461675    1389 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.463263    1389 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: E0603 14:50:30.464581    1389 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.183733    1442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.183842    1442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.187119    1442 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: E0603 14:50:31.187481    1442 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.822960    1525 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.823030    1525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.823310    1525 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.825110    1525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.838917    1525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.864578    1525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.864681    1525 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.865871    1525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.865955    1525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-720500","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.867023    1525 topology_manager.go:138] "Creating topology manager with none policy"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.867065    1525 container_manager_linux.go:301] "Creating device plugin manager"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.868032    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872473    1525 kubelet.go:400] "Attempting to sync node with API server"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872570    1525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872603    1525 kubelet.go:312] "Adding apiserver pod source"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.874552    1525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.878535    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.878646    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.881181    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.881366    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.883254    1525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.884826    1525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.885850    1525 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.886975    1525 server.go:1264] "Started kubelet"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.895136    1525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.899089    1525 server.go:455] "Adding debug handlers to kubelet server"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.899110    1525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.901004    1525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.902811    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.22.154.20:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-720500.17d5860f76c4d283  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-720500,UID:multinode-720500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-720500,},FirstTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,LastTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-720500,}"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.905416    1525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.915751    1525 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.921759    1525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.948843    1525 reconciler.go:26] "Reconciler: start to sync state"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.955483    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="200ms"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.955934    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.956139    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956405    1525 factory.go:221] Registration of the systemd container factory successfully
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956512    1525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956608    1525 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956737    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.958873    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.958985    1525 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.959014    1525 kubelet.go:2337] "Starting kubelet main sync loop"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.959250    1525 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.983497    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.993696    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.993829    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023526    1525 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023565    1525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023586    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024426    1525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024488    1525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024529    1525 policy_none.go:49] "None policy: Start"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.028955    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.030495    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.035699    1525 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.035745    1525 state_mem.go:35] "Initializing new in-memory state store"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.036656    1525 state_mem.go:75] "Updated machine memory state"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.041946    1525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.042384    1525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.043501    1525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.049031    1525 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-720500\" not found"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.060498    1525 topology_manager.go:215] "Topology Admit Handler" podUID="f58e384885de6f2352fb028e836ba47f" podNamespace="kube-system" podName="kube-scheduler-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.061562    1525 topology_manager.go:215] "Topology Admit Handler" podUID="a9aa17bec6c8b90196f8771e2e5c6391" podNamespace="kube-system" podName="kube-apiserver-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.062289    1525 topology_manager.go:215] "Topology Admit Handler" podUID="78d1bd07ad8cdd8611c0b5d7e797ef30" podNamespace="kube-system" podName="kube-controller-manager-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.063858    1525 topology_manager.go:215] "Topology Admit Handler" podUID="7a9c45e53018cd74c5a13ccfd96f1479" podNamespace="kube-system" podName="etcd-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.065312    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b548c7f105007ea217eb3af0981a11ac9ecbfca503b21d85486e0b994bd5ea"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.075734    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.101720    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf3e16838818729d3b0679cd21964fdf47441ebf169a121ac598081429082e9d"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.120274    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91df341636e892cd93c25fa7ad7384bcf2bd819376c32058f4ee8317633ccdb9"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.136641    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73f8312902b01b75c8ea80234be416d3ffc9a1089252bd3c6d01a2cd098215be"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.156601    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.157623    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="400ms"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.173261    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b3080db261aed80f74241b549711c9e0e8bf8d76726121d9447965ca7e2087"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188271    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-kubeconfig\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188310    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-ca-certs\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188378    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-k8s-certs\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188400    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188427    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7a9c45e53018cd74c5a13ccfd96f1479-etcd-certs\") pod \"etcd-multinode-720500\" (UID: \"7a9c45e53018cd74c5a13ccfd96f1479\") " pod="kube-system/etcd-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188469    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7a9c45e53018cd74c5a13ccfd96f1479-etcd-data\") pod \"etcd-multinode-720500\" (UID: \"7a9c45e53018cd74c5a13ccfd96f1479\") " pod="kube-system/etcd-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188506    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f58e384885de6f2352fb028e836ba47f-kubeconfig\") pod \"kube-scheduler-multinode-720500\" (UID: \"f58e384885de6f2352fb028e836ba47f\") " pod="kube-system/kube-scheduler-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188525    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-ca-certs\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188569    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-k8s-certs\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188590    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-flexvolume-dir\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188614    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.189831    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45c98b77811e1a1610a97d2f641597b26b618ffe831fe5ad3ec241b34af76a6b"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.211600    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dbe33ccede837b8bf9917f1f085422d402ca29fcadcc3715a72edb8570a28f0"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.232599    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.233792    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.559275    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="800ms"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.635611    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.636574    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: W0603 14:50:34.930484    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.930562    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.013602    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.013737    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.058377    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.058502    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.276396    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.276674    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.361658    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="1.6s"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: I0603 14:50:35.437822    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.439455    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.759532    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.22.154.20:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-720500.17d5860f76c4d283  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-720500,UID:multinode-720500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-720500,},FirstTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,LastTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-720500,}"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:37 multinode-720500 kubelet[1525]: I0603 14:50:37.041688    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.524109    1525 kubelet_node_status.go:112] "Node was previously registered" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.524300    1525 kubelet_node_status.go:76] "Successfully registered node" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.525714    1525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.527071    1525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.528427    1525 setters.go:580] "Node became not ready" node="multinode-720500" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-03T14:50:39Z","lastTransitionTime":"2024-06-03T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.569920    1525 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-720500\" already exists" pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.884500    1525 apiserver.go:52] "Watching apiserver"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.889699    1525 topology_manager.go:215] "Topology Admit Handler" podUID="ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a" podNamespace="kube-system" podName="kube-proxy-64l9x"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.889893    1525 topology_manager.go:215] "Topology Admit Handler" podUID="08ea7c30-4962-4026-8eb0-6864835e97e6" podNamespace="kube-system" podName="kindnet-26s27"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890015    1525 topology_manager.go:215] "Topology Admit Handler" podUID="5d120704-a803-4278-aa7c-32304a6164a3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c9wpc"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890251    1525 topology_manager.go:215] "Topology Admit Handler" podUID="8380cfdf-9758-4fd8-a511-db50974806a2" podNamespace="kube-system" podName="storage-provisioner"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890408    1525 topology_manager.go:215] "Topology Admit Handler" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef" podNamespace="default" podName="busybox-fc5497c4f-n2t5d"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890532    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-720500" podUID="a99295b9-ba4f-4b3f-9bc7-3e6e09de9b09"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.890739    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.891991    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.919591    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-720500"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.922418    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947805    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a-lib-modules\") pod \"kube-proxy-64l9x\" (UID: \"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a\") " pod="kube-system/kube-proxy-64l9x"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947924    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-cni-cfg\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947970    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-xtables-lock\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947990    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8380cfdf-9758-4fd8-a511-db50974806a2-tmp\") pod \"storage-provisioner\" (UID: \"8380cfdf-9758-4fd8-a511-db50974806a2\") " pod="kube-system/storage-provisioner"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.948046    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a-xtables-lock\") pod \"kube-proxy-64l9x\" (UID: \"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a\") " pod="kube-system/kube-proxy-64l9x"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.948118    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-lib-modules\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.949354    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.949442    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:40.449414293 +0000 UTC m=+6.735278838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.967616    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc25f3659bb9b137f23bf9424dba20e" path="/var/lib/kubelet/pods/2dc25f3659bb9b137f23bf9424dba20e/volumes"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.969042    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36433239452f37b4b0410f69c12da408" path="/var/lib/kubelet/pods/36433239452f37b4b0410f69c12da408/volumes"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984720    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984802    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984886    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:40.484862826 +0000 UTC m=+6.770727471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: I0603 14:50:40.019663    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-720500" podStartSLOduration=1.019649758 podStartE2EDuration="1.019649758s" podCreationTimestamp="2024-06-03 14:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:50:40.018824057 +0000 UTC m=+6.304688702" watchObservedRunningTime="2024-06-03 14:50:40.019649758 +0000 UTC m=+6.305514303"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.455710    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.455796    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:41.455777259 +0000 UTC m=+7.741641804 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556713    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556760    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556889    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:41.556863952 +0000 UTC m=+7.842728597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: I0603 14:50:40.845891    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae2b089ecf3ba840b08192449967b2406f6c6d0d8a56a114ddaabc35e3c7ee5"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.271560    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3698c141b11639f71ba16cbcb832e7c02097b07aaf307ba72c7cf41a64d9dde"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.438384    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a4ad712a66e8ac5a3ba6d988006318e7c0932c2ad0e4ce9838e7a98695f555"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.438646    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-720500" podUID="aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.465430    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.465640    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:43.465616988 +0000 UTC m=+9.751481633 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.502271    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566766    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566801    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566917    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:43.566874981 +0000 UTC m=+9.852739626 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.961788    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.961975    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:42 multinode-720500 kubelet[1525]: I0603 14:50:42.520599    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-720500" podUID="aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.487623    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.487724    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:47.487705549 +0000 UTC m=+13.773570194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588583    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588739    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588832    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:47.588814442 +0000 UTC m=+13.874678987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.961044    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.961649    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:44 multinode-720500 kubelet[1525]: E0603 14:50:44.044586    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:45 multinode-720500 kubelet[1525]: E0603 14:50:45.961659    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:45 multinode-720500 kubelet[1525]: E0603 14:50:45.961954    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.521989    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.522196    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:55.522177172 +0000 UTC m=+21.808041717 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.622845    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.623053    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.623208    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:55.623162574 +0000 UTC m=+21.909027119 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.962070    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.962858    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.046385    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.959451    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.960279    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:51 multinode-720500 kubelet[1525]: E0603 14:50:51.960531    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:51 multinode-720500 kubelet[1525]: E0603 14:50:51.961799    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:52 multinode-720500 kubelet[1525]: I0603 14:50:52.534860    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-720500" podStartSLOduration=5.534842522 podStartE2EDuration="5.534842522s" podCreationTimestamp="2024-06-03 14:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:50:52.533300056 +0000 UTC m=+18.819164701" watchObservedRunningTime="2024-06-03 14:50:52.534842522 +0000 UTC m=+18.820707067"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:53 multinode-720500 kubelet[1525]: E0603 14:50:53.960555    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:53 multinode-720500 kubelet[1525]: E0603 14:50:53.961087    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:54 multinode-720500 kubelet[1525]: E0603 14:50:54.048175    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.600709    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.600890    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:51:11.600870216 +0000 UTC m=+37.886734761 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701124    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701172    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701306    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:51:11.701288915 +0000 UTC m=+37.987153560 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.959849    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.960175    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:57 multinode-720500 kubelet[1525]: E0603 14:50:57.960559    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:57 multinode-720500 kubelet[1525]: E0603 14:50:57.961245    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.050189    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.962718    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.963597    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:01 multinode-720500 kubelet[1525]: E0603 14:51:01.959962    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:01 multinode-720500 kubelet[1525]: E0603 14:51:01.961107    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:03 multinode-720500 kubelet[1525]: E0603 14:51:03.960485    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:03 multinode-720500 kubelet[1525]: E0603 14:51:03.961168    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:04 multinode-720500 kubelet[1525]: E0603 14:51:04.052718    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:05 multinode-720500 kubelet[1525]: E0603 14:51:05.960258    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:05 multinode-720500 kubelet[1525]: E0603 14:51:05.960918    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:07 multinode-720500 kubelet[1525]: E0603 14:51:07.960257    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:07 multinode-720500 kubelet[1525]: E0603 14:51:07.961704    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.054870    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.962422    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.963393    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.663780    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.664114    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:51:43.66409273 +0000 UTC m=+69.949957275 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.764900    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.764958    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.765022    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:51:43.765005046 +0000 UTC m=+70.050869691 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.962142    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.962815    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: I0603 14:51:12.896193    1525 scope.go:117] "RemoveContainer" containerID="097ab9a9a33bbee7997d827b04c2900ded8d532f232d924bb9d84ecc302ec8b8"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: I0603 14:51:12.896857    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: E0603 14:51:12.897037    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8380cfdf-9758-4fd8-a511-db50974806a2)\"" pod="kube-system/storage-provisioner" podUID="8380cfdf-9758-4fd8-a511-db50974806a2"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:13 multinode-720500 kubelet[1525]: E0603 14:51:13.960835    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:13 multinode-720500 kubelet[1525]: E0603 14:51:13.961713    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:14 multinode-720500 kubelet[1525]: E0603 14:51:14.056993    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:15 multinode-720500 kubelet[1525]: E0603 14:51:15.959976    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:15 multinode-720500 kubelet[1525]: E0603 14:51:15.961758    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:17 multinode-720500 kubelet[1525]: E0603 14:51:17.963254    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:17 multinode-720500 kubelet[1525]: E0603 14:51:17.963475    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:25 multinode-720500 kubelet[1525]: I0603 14:51:25.959992    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]: E0603 14:51:33.993879    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.037024    1525 scope.go:117] "RemoveContainer" containerID="dcd798ff8a4661302e83f6f11f14422de529b0502fcd6143a4a29a3f45757a8a"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.091663    1525 scope.go:117] "RemoveContainer" containerID="5185046feae6a986658119ffc29d3a23423e83dba5ada983e73072c57ee6ad2d"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.627773    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.667520    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7"
	I0603 14:51:55.946760    9752 logs.go:123] Gathering logs for kube-proxy [42926c33070c] ...
	I0603 14:51:55.946760    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42926c33070c"
	I0603 14:51:55.978561    9752 command_runner.go:130] ! I0603 14:50:42.069219       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:51:55.978630    9752 command_runner.go:130] ! I0603 14:50:42.114052       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.154.20"]
	I0603 14:51:55.978808    9752 command_runner.go:130] ! I0603 14:50:42.256500       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:51:55.979038    9752 command_runner.go:130] ! I0603 14:50:42.256559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:51:55.979038    9752 command_runner.go:130] ! I0603 14:50:42.256598       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:51:55.979038    9752 command_runner.go:130] ! I0603 14:50:42.262735       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:51:55.979154    9752 command_runner.go:130] ! I0603 14:50:42.263687       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:51:55.979754    9752 command_runner.go:130] ! I0603 14:50:42.263771       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.980763    9752 command_runner.go:130] ! I0603 14:50:42.271889       1 config.go:192] "Starting service config controller"
	I0603 14:51:55.981553    9752 command_runner.go:130] ! I0603 14:50:42.273191       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:51:55.981628    9752 command_runner.go:130] ! I0603 14:50:42.273658       1 config.go:319] "Starting node config controller"
	I0603 14:51:55.981694    9752 command_runner.go:130] ! I0603 14:50:42.273675       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:51:55.981728    9752 command_runner.go:130] ! I0603 14:50:42.275244       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:51:55.981794    9752 command_runner.go:130] ! I0603 14:50:42.279063       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:51:55.981811    9752 command_runner.go:130] ! I0603 14:50:42.373930       1 shared_informer.go:320] Caches are synced for node config
	I0603 14:51:55.981811    9752 command_runner.go:130] ! I0603 14:50:42.373994       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:51:55.981811    9752 command_runner.go:130] ! I0603 14:50:42.379201       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:51:55.983901    9752 logs.go:123] Gathering logs for kube-proxy [3823f2e2bdb2] ...
	I0603 14:51:55.983901    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3823f2e2bdb2"
	I0603 14:51:56.009504    9752 command_runner.go:130] ! I0603 14:27:34.209759       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:51:56.009504    9752 command_runner.go:130] ! I0603 14:27:34.223354       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.150.195"]
	I0603 14:51:56.010051    9752 command_runner.go:130] ! I0603 14:27:34.293018       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:51:56.010051    9752 command_runner.go:130] ! I0603 14:27:34.293146       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:51:56.010051    9752 command_runner.go:130] ! I0603 14:27:34.293240       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:51:56.010051    9752 command_runner.go:130] ! I0603 14:27:34.299545       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:51:56.010154    9752 command_runner.go:130] ! I0603 14:27:34.300745       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:51:56.010208    9752 command_runner.go:130] ! I0603 14:27:34.300860       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:56.010231    9752 command_runner.go:130] ! I0603 14:27:34.304329       1 config.go:192] "Starting service config controller"
	I0603 14:51:56.010297    9752 command_runner.go:130] ! I0603 14:27:34.304371       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:51:56.010297    9752 command_runner.go:130] ! I0603 14:27:34.304437       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:51:56.010297    9752 command_runner.go:130] ! I0603 14:27:34.304447       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:51:56.010297    9752 command_runner.go:130] ! I0603 14:27:34.308322       1 config.go:319] "Starting node config controller"
	I0603 14:51:56.010391    9752 command_runner.go:130] ! I0603 14:27:34.308362       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:51:56.010391    9752 command_runner.go:130] ! I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:51:56.010391    9752 command_runner.go:130] ! I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:51:56.010391    9752 command_runner.go:130] ! I0603 14:27:34.409156       1 shared_informer.go:320] Caches are synced for node config
	I0603 14:51:56.012642    9752 logs.go:123] Gathering logs for Docker ...
	I0603 14:51:56.012642    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:05 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 systemd[1]: Starting Docker Application Container Engine...
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.547305957Z" level=info msg="Starting up"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.548486369Z" level=info msg="containerd not running, starting managed containerd"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.550163087Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=663
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.588439684Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615622567Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615812869Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615892669Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615996071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.616816479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.616941980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617127782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617266784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617291284Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617304084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617934891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.618718299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621568528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621673229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621927432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622026433Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622569239Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622740941Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622759241Z" level=info msg="metadata content store policy set" policy=shared
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.634889967Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.634987368Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635019568Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635037868Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635068969Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635139569Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635454873Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635562874Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635584474Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635599174Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635613674Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635627574Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635643175Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635663175Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635679475Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635693275Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635706375Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635718075Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635850277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635881177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635899277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635913377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635929077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635942078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635954478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635967678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635981078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635996378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636009278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636021378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636050579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636066579Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636087279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636101979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636113679Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636360182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636390182Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636405182Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636417883Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 14:51:56.049559    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636428083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.049559    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636445483Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 14:51:56.049559    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636457683Z" level=info msg="NRI interface is disabled by configuration."
	I0603 14:51:56.049559    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636895188Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 14:51:56.049559    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637062689Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 14:51:56.049559    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637110790Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637130090Z" level=info msg="containerd successfully booted in 0.051012s"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:58 multinode-720500 dockerd[657]: time="2024-06-03T14:49:58.605269655Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:58 multinode-720500 dockerd[657]: time="2024-06-03T14:49:58.830205845Z" level=info msg="Loading containers: start."
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.290763156Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.371043862Z" level=info msg="Loading containers: done."
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.398495238Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.399429147Z" level=info msg="Daemon has completed initialization"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.454347399Z" level=info msg="API listen on [::]:2376"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.454526701Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 systemd[1]: Started Docker Application Container Engine.
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 systemd[1]: Stopping Docker Application Container Engine...
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.502444000Z" level=info msg="Processing signal 'terminated'"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.507803805Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508158405Z" level=info msg="Daemon shutdown complete"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508284905Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508315705Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: docker.service: Deactivated successfully.
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: Stopped Docker Application Container Engine.
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: Starting Docker Application Container Engine...
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.581999493Z" level=info msg="Starting up"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.582971494Z" level=info msg="containerd not running, starting managed containerd"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.586955297Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1060
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.619972528Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.642740749Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.642897349Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643057949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643079049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643105249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643117549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643236149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643414849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643436249Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643446349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643469050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643579550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646283452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646409552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646539152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646683652Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646720152Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.647911754Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648009354Z" level=info msg="metadata content store policy set" policy=shared
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648261654Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648362554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648383154Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648399754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648413954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648460954Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649437555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649582355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649628755Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649649855Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649667455Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649683955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649698955Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649721455Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649742255Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649758455Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649834555Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649964955Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650022156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650042056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650059256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650077256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650091456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650109256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650125756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650143656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650161256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650181156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650384856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650434256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650459456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650483856Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650511256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650529056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650544556Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650596756Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650696356Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650722156Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650741356Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650755156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650769156Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650940656Z" level=info msg="NRI interface is disabled by configuration."
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652184258Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652391658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652570358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652616758Z" level=info msg="containerd successfully booted in 0.035610s"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.629822557Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.661126586Z" level=info msg="Loading containers: start."
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.933266636Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.024107020Z" level=info msg="Loading containers: done."
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.055971749Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.056192749Z" level=info msg="Daemon has completed initialization"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.104434794Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.104654694Z" level=info msg="API listen on [::]:2376"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 systemd[1]: Started Docker Application Container Engine.
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Loaded network plugin cni"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Start cri-dockerd grpc backend"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-c9wpc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a\""
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-n2t5d_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0\""
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.786808143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.786968543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.787857244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.788128044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.878884027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882292830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882532331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882658231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.053891    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.964961706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.053891    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965059107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.053891    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965073207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.053891    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965170307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0461b752e72814194a3ff0778ad4897f646990c90f8c3fcfb9c28be750bfab15/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.004294343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.006505445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.006802445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.007209145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/29feb700b8ebf36a5e533c2d019afb67137df3c39cd996736aba2eea6197e1b3/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e60bc15f541ebe44a8b2d1cc1a4a878d35fac3b2b8b23ad5b59ae6a7c18fa90/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/192b150e443d2d545d193223f6cdc02bc60fa88f9e646c72e84cad439aec3645/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330597043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330771943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330809243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330940843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.411710918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412168918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412399218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412596918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.543921039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544077939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544114939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544224939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547915343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547962443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547974143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.548055043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596002188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596253788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596401388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596628788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633733423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633807223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633821423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633921623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665408852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665567252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665590052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665814152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ae2b089ecf3ba840b08192449967b2406f6c6d0d8a56a114ddaabc35e3c7ee5/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b4a4ad712a66e8ac5a3ba6d988006318e7c0932c2ad0e4ce9838e7a98695f555/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.147693095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.147891096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.148071396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.148525196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236102677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236209377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236229077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236423777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a3698c141b11639f71ba16cbcb832e7c02097b07aaf307ba72c7cf41a64d9dde/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.541976658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.542524859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.542803559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.545377661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1054]: time="2024-06-03T14:51:11.898791571Z" level=info msg="ignoring event" container=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.899973164Z" level=info msg="shim disconnected" id=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 namespace=moby
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.900143563Z" level=warning msg="cleaning up after shim disconnected" id=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 namespace=moby
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.900158663Z" level=info msg="cleaning up dead shim" namespace=moby
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147466127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147614527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147634527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.148526626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.314851642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.315085942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.315407842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.320950643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354750647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354889547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354906247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.355401447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894225423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894606924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894797424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894956925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.942044061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.942892263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.943014363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.943428065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:56 multinode-720500 dockerd[1054]: 2024/06/03 14:51:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:58.597456    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods
	I0603 14:51:58.597456    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:58.597456    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:58.597456    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:58.602881    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:58.603886    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:58.603886    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:58 GMT
	I0603 14:51:58.603886    9752 round_trippers.go:580]     Audit-Id: 2adbffed-296b-4ad2-802f-cba40c2a9b63
	I0603 14:51:58.603886    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:58.603886    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:58.603886    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:58.603886    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:58.604193    9752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1997"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1984","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86572 chars]
	I0603 14:51:58.608947    9752 system_pods.go:59] 12 kube-system pods found
	I0603 14:51:58.608947    9752 system_pods.go:61] "coredns-7db6d8ff4d-c9wpc" [5d120704-a803-4278-aa7c-32304a6164a3] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "etcd-multinode-720500" [1a2533a2-16e9-4696-9694-186579c52b55] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kindnet-26s27" [08ea7c30-4962-4026-8eb0-6864835e97e6] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kindnet-fmfz2" [78515e23-16d2-4a8e-9845-375aa17ab80b] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kindnet-h58hc" [43c48b16-ca18-4ce1-9a34-be58cc0c981b] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kube-apiserver-multinode-720500" [b27b9256-3c5b-4432-8a9e-ebe5303b88f0] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kube-controller-manager-multinode-720500" [6ba9c1e5-75bb-4731-9105-49acbbf3f237] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kube-proxy-64l9x" [ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kube-proxy-ctm5l" [38069b1b-8ba9-46af-b4e7-7add5d9c67fc] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kube-proxy-sm9rr" [4f0321c0-f47d-463e-bda2-919f37735748] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kube-scheduler-multinode-720500" [9d420d28-dde0-4504-a4d4-f840cab56ebe] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "storage-provisioner" [8380cfdf-9758-4fd8-a511-db50974806a2] Running
	I0603 14:51:58.608947    9752 system_pods.go:74] duration metric: took 3.7042213s to wait for pod list to return data ...
	I0603 14:51:58.608947    9752 default_sa.go:34] waiting for default service account to be created ...
	I0603 14:51:58.608947    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/default/serviceaccounts
	I0603 14:51:58.609930    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:58.609967    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:58.609967    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:58.612887    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:58.612887    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:58.612887    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:58.612887    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:58.612887    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:58.612887    9752 round_trippers.go:580]     Content-Length: 262
	I0603 14:51:58.612887    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:58 GMT
	I0603 14:51:58.612887    9752 round_trippers.go:580]     Audit-Id: 393e5682-f954-4ea9-b887-c1f2e4a42b19
	I0603 14:51:58.612887    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:58.612887    9752 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1997"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"fbd8badf-59ec-4931-b3bf-13e96cb86c7b","resourceVersion":"352","creationTimestamp":"2024-06-03T14:27:32Z"}}]}
	I0603 14:51:58.613347    9752 default_sa.go:45] found service account: "default"
	I0603 14:51:58.613347    9752 default_sa.go:55] duration metric: took 4.4004ms for default service account to be created ...
	I0603 14:51:58.613347    9752 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 14:51:58.613347    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods
	I0603 14:51:58.613347    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:58.613347    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:58.613347    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:58.617942    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:58.617942    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:58.617942    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:58.617942    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:58.617942    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:58.617942    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:58.617942    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:58 GMT
	I0603 14:51:58.617942    9752 round_trippers.go:580]     Audit-Id: d88e58c6-926c-4a33-a21c-d625a32ba7cc
	I0603 14:51:58.619012    9752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1997"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1984","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86572 chars]
	I0603 14:51:58.622343    9752 system_pods.go:86] 12 kube-system pods found
	I0603 14:51:58.622343    9752 system_pods.go:89] "coredns-7db6d8ff4d-c9wpc" [5d120704-a803-4278-aa7c-32304a6164a3] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "etcd-multinode-720500" [1a2533a2-16e9-4696-9694-186579c52b55] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kindnet-26s27" [08ea7c30-4962-4026-8eb0-6864835e97e6] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kindnet-fmfz2" [78515e23-16d2-4a8e-9845-375aa17ab80b] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kindnet-h58hc" [43c48b16-ca18-4ce1-9a34-be58cc0c981b] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kube-apiserver-multinode-720500" [b27b9256-3c5b-4432-8a9e-ebe5303b88f0] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kube-controller-manager-multinode-720500" [6ba9c1e5-75bb-4731-9105-49acbbf3f237] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kube-proxy-64l9x" [ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kube-proxy-ctm5l" [38069b1b-8ba9-46af-b4e7-7add5d9c67fc] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kube-proxy-sm9rr" [4f0321c0-f47d-463e-bda2-919f37735748] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kube-scheduler-multinode-720500" [9d420d28-dde0-4504-a4d4-f840cab56ebe] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "storage-provisioner" [8380cfdf-9758-4fd8-a511-db50974806a2] Running
	I0603 14:51:58.622343    9752 system_pods.go:126] duration metric: took 8.9956ms to wait for k8s-apps to be running ...
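The pod check above lists everything in kube-system through the API server (the PodList request a few lines earlier) and requires each pod to report phase Running before the wait completes. For orientation only, a minimal client-go sketch of the same check; the kubeconfig path is an illustrative assumption, and the real implementation is minikube's own system_pods helper rather than this code:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is assumed; minikube keeps its own under the profile directory.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                fmt.Printf("pod %s is not Running yet (phase=%s)\n", p.Name, p.Status.Phase)
            }
        }
        fmt.Printf("%d kube-system pods checked\n", len(pods.Items))
    }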
	I0603 14:51:58.622343    9752 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 14:51:58.635441    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 14:51:58.660474    9752 system_svc.go:56] duration metric: took 38.1304ms WaitForService to wait for kubelet
	I0603 14:51:58.660474    9752 kubeadm.go:576] duration metric: took 1m14.8709263s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 14:51:58.660474    9752 node_conditions.go:102] verifying NodePressure condition ...
	I0603 14:51:58.660650    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes
	I0603 14:51:58.660709    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:58.660709    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:58.660709    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:58.664465    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:58.664465    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:58.664465    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:58.664465    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:58.664465    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:58 GMT
	I0603 14:51:58.664465    9752 round_trippers.go:580]     Audit-Id: 9b0c43f2-4a5a-4b3f-bdf5-ddc7fe069877
	I0603 14:51:58.664465    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:58.664702    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:58.664799    9752 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1997"},"items":[{"metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16259 chars]
	I0603 14:51:58.666485    9752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:51:58.666485    9752 node_conditions.go:123] node cpu capacity is 2
	I0603 14:51:58.666599    9752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:51:58.666599    9752 node_conditions.go:123] node cpu capacity is 2
	I0603 14:51:58.666599    9752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:51:58.666599    9752 node_conditions.go:123] node cpu capacity is 2
	I0603 14:51:58.666599    9752 node_conditions.go:105] duration metric: took 6.1255ms to run NodePressure ...
	I0603 14:51:58.666599    9752 start.go:240] waiting for startup goroutines ...
	I0603 14:51:58.666703    9752 start.go:245] waiting for cluster config update ...
	I0603 14:51:58.666703    9752 start.go:254] writing updated cluster config ...
	I0603 14:51:58.671202    9752 out.go:177] 
	I0603 14:51:58.678379    9752 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:51:58.687586    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:51:58.687586    9752 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:51:58.696211    9752 out.go:177] * Starting "multinode-720500-m02" worker node in "multinode-720500" cluster
	I0603 14:51:58.699076    9752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 14:51:58.699076    9752 cache.go:56] Caching tarball of preloaded images
	I0603 14:51:58.699076    9752 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 14:51:58.699076    9752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 14:51:58.699076    9752 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:51:58.701385    9752 start.go:360] acquireMachinesLock for multinode-720500-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 14:51:58.701385    9752 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-720500-m02"
	I0603 14:51:58.701385    9752 start.go:96] Skipping create...Using existing machine configuration
	I0603 14:51:58.701385    9752 fix.go:54] fixHost starting: m02
	I0603 14:51:58.702494    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:00.924858    9752 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 14:52:00.925086    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:00.925086    9752 fix.go:112] recreateIfNeeded on multinode-720500-m02: state=Stopped err=<nil>
	W0603 14:52:00.925086    9752 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 14:52:00.928550    9752 out.go:177] * Restarting existing hyperv VM for "multinode-720500-m02" ...
	I0603 14:52:00.936694    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-720500-m02
	I0603 14:52:04.044966    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:52:04.044966    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:04.047795    9752 main.go:141] libmachine: Waiting for host to start...
	I0603 14:52:04.048214    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:06.378075    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:06.378075    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:06.378075    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:08.970262    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:52:08.970473    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:09.984158    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:12.243565    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:12.243565    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:12.243730    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:14.815718    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:52:14.815750    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:15.830056    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:18.058768    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:18.059688    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:18.059746    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:20.661241    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:52:20.662221    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:21.665405    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:23.930478    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:23.930537    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:23.930758    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:26.539332    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:52:26.539332    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:27.553638    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:29.836618    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:29.837446    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:29.837446    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:32.494172    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:52:32.494212    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:32.496860    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:34.681231    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:34.681231    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:34.681644    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:37.292848    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:52:37.292848    9752 main.go:141] libmachine: [stderr =====>] : 
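The wait loop above is the driver's start-up probe: it repeatedly asks PowerShell for the VM state, and once the VM reports Running it polls the first network adapter's first IP address, retrying until an address (172.22.149.253 here) appears. A stripped-down Go sketch of that polling pattern; the retry interval and error handling are illustrative assumptions, not minikube's exact Hyper-V driver code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // psOutput runs a PowerShell expression and returns its trimmed stdout.
    func psOutput(expr string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        vm := "multinode-720500-m02" // VM name taken from the log above
        for {
            state, err := psOutput(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            if err != nil || state != "Running" {
                time.Sleep(time.Second)
                continue
            }
            ip, _ := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
            if ip != "" {
                fmt.Println("VM is up at", ip)
                return
            }
            time.Sleep(time.Second) // adapter has no address yet; retry
        }
    }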
	I0603 14:52:37.293061    9752 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:52:37.296274    9752 machine.go:94] provisionDockerMachine start ...
	I0603 14:52:37.296274    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:39.478747    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:39.478747    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:39.478883    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:42.059546    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:52:42.059546    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:42.065913    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:52:42.065979    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:52:42.065979    9752 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 14:52:42.190260    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 14:52:42.190260    9752 buildroot.go:166] provisioning hostname "multinode-720500-m02"
	I0603 14:52:42.190406    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:44.334883    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:44.335874    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:44.335874    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:46.891967    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:52:46.891967    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:46.898239    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:52:46.899050    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:52:46.899050    9752 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-720500-m02 && echo "multinode-720500-m02" | sudo tee /etc/hostname
	I0603 14:52:47.052489    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-720500-m02
	
	I0603 14:52:47.052489    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:49.203382    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:49.203382    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:49.203382    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:51.772847    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:52:51.773530    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:51.779804    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:52:51.780383    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:52:51.780383    9752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-720500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-720500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-720500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 14:52:51.921583    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
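The provisioning step above pushes the hostname over SSH in two commands: first the `sudo hostname ... | sudo tee /etc/hostname` call, then the idempotent /etc/hosts edit shown verbatim in the log (rewrite an existing 127.0.1.1 line, otherwise append one). As a rough illustration, a Go sketch that drives the same /etc/hosts edit over SSH with golang.org/x/crypto/ssh; the key path and the InsecureIgnoreHostKey callback are assumptions made for brevity:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path is an assumption; the log uses the machine's id_rsa under .minikube\machines.
        key, err := os.ReadFile("/path/to/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
        }
        client, err := ssh.Dial("tcp", "172.22.149.253:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        // Same idempotent edit as the script in the log: rewrite the 127.0.1.1 line or append one.
        script := `if ! grep -xq '.*\smultinode-720500-m02' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-720500-m02/g' /etc/hosts
      else
        echo '127.0.1.1 multinode-720500-m02' | sudo tee -a /etc/hosts
      fi
    fi`
        out, err := sess.CombinedOutput(script)
        fmt.Println(string(out), err)
    }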
	I0603 14:52:51.921583    9752 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 14:52:51.921583    9752 buildroot.go:174] setting up certificates
	I0603 14:52:51.921583    9752 provision.go:84] configureAuth start
	I0603 14:52:51.921583    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:54.067716    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:54.067716    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:54.067716    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:56.615390    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:52:56.615390    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:56.616263    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:58.751125    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:58.751125    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:58.751996    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:01.338340    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:01.338340    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:01.338340    9752 provision.go:143] copyHostCerts
	I0603 14:53:01.339387    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 14:53:01.339943    9752 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 14:53:01.339943    9752 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 14:53:01.340326    9752 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 14:53:01.341593    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 14:53:01.341799    9752 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 14:53:01.341913    9752 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 14:53:01.342344    9752 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 14:53:01.343448    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 14:53:01.343724    9752 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 14:53:01.343867    9752 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 14:53:01.344149    9752 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 14:53:01.345161    9752 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-720500-m02 san=[127.0.0.1 172.22.149.253 localhost minikube multinode-720500-m02]
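configureAuth regenerates the machine's server certificate so that its SAN list covers the VM's freshly leased IP (172.22.149.253) alongside 127.0.0.1, localhost, minikube and the machine name, signed by the shared CA. A minimal crypto/x509 sketch of issuing such a certificate; the file names, serial number, validity window and PKCS#1 CA-key format are assumptions for illustration (error handling on the PEM blocks is trimmed), not minikube's actual provisioner:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func mustRead(path string) []byte {
        b, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        return b
    }

    func main() {
        // CA material as copied around in the log; assumes an RSA (PKCS#1) CA key.
        caBlock, _ := pem.Decode(mustRead("ca.pem"))
        keyBlock, _ := pem.Decode(mustRead("ca-key.pem"))
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            panic(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            panic(err)
        }

        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1), // illustrative; real provisioners use random serials
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-720500-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN entries mirroring the san=[...] list in the log.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.22.149.253")},
            DNSNames:    []string{"localhost", "minikube", "multinode-720500-m02"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }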
	I0603 14:53:01.434282    9752 provision.go:177] copyRemoteCerts
	I0603 14:53:01.449343    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 14:53:01.449343    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:03.627583    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:03.627583    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:03.628011    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:06.208381    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:06.208381    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:06.208381    9752 sshutil.go:53] new ssh client: &{IP:172.22.149.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:53:06.306002    9752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8566202s)
	I0603 14:53:06.306002    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 14:53:06.306002    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 14:53:06.354488    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 14:53:06.354898    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0603 14:53:06.405399    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 14:53:06.405399    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 14:53:06.461413    9752 provision.go:87] duration metric: took 14.5397128s to configureAuth
	I0603 14:53:06.461413    9752 buildroot.go:189] setting minikube options for container-runtime
	I0603 14:53:06.462466    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:53:06.462634    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:08.674379    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:08.675292    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:08.675292    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:11.235594    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:11.235594    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:11.241616    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:53:11.241742    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:53:11.241742    9752 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 14:53:11.364326    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 14:53:11.364403    9752 buildroot.go:70] root file system type: tmpfs
	I0603 14:53:11.364662    9752 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 14:53:11.364773    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:13.498365    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:13.498464    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:13.498464    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:16.042981    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:16.042981    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:16.049551    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:53:16.050096    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:53:16.050096    9752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.22.154.20"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 14:53:16.201264    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.22.154.20
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 14:53:16.201264    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:18.376783    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:18.378078    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:18.378153    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:20.959655    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:20.960474    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:20.966200    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:53:20.966736    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:53:20.966736    9752 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 14:53:23.264026    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 14:53:23.264026    9752 machine.go:97] duration metric: took 45.9673791s to provisionDockerMachine
	I0603 14:53:23.264026    9752 start.go:293] postStartSetup for "multinode-720500-m02" (driver="hyperv")
	I0603 14:53:23.264026    9752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 14:53:23.276578    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 14:53:23.276578    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:25.434580    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:25.435367    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:25.435367    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:28.003954    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:28.003954    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:28.005193    9752 sshutil.go:53] new ssh client: &{IP:172.22.149.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:53:28.118091    9752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8414732s)
	I0603 14:53:28.130081    9752 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 14:53:28.139093    9752 command_runner.go:130] > NAME=Buildroot
	I0603 14:53:28.139280    9752 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 14:53:28.139280    9752 command_runner.go:130] > ID=buildroot
	I0603 14:53:28.139280    9752 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 14:53:28.139280    9752 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 14:53:28.139280    9752 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 14:53:28.139280    9752 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 14:53:28.139969    9752 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 14:53:28.140696    9752 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 14:53:28.140696    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 14:53:28.155361    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 14:53:28.176021    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 14:53:28.220503    9752 start.go:296] duration metric: took 4.9564372s for postStartSetup
	I0603 14:53:28.220724    9752 fix.go:56] duration metric: took 1m29.5186123s for fixHost
	I0603 14:53:28.220856    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:30.376443    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:30.376443    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:30.376443    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:32.957532    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:32.957981    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:32.963744    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:53:32.964505    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:53:32.964505    9752 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 14:53:33.093612    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717426413.100380487
	
	I0603 14:53:33.093732    9752 fix.go:216] guest clock: 1717426413.100380487
	I0603 14:53:33.093732    9752 fix.go:229] Guest: 2024-06-03 14:53:33.100380487 +0000 UTC Remote: 2024-06-03 14:53:28.2207248 +0000 UTC m=+299.350066901 (delta=4.879655687s)
	I0603 14:53:33.093850    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:35.201749    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:35.202147    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:35.202147    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:37.790908    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:37.790908    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:37.797165    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:53:37.797165    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:53:37.797776    9752 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717426413
	I0603 14:53:37.931180    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 14:53:33 UTC 2024
	
	I0603 14:53:37.931304    9752 fix.go:236] clock set: Mon Jun  3 14:53:33 UTC 2024
	 (err=<nil>)
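The clock fix above reads the guest clock with `date +%s.%N`, compares it against the host-side reference timestamp recorded when fixHost finished, and, because the roughly 4.88s skew exceeds the allowed tolerance, rewrites the guest clock with the `sudo date -s @<epoch-seconds>` command a few lines earlier. A small Go sketch reproducing the delta computation from the two timestamps in the log (the two-second tolerance used here is an assumption for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // The two timestamps from the log above: the guest's `date +%s.%N` reading
        // and the host-side reference captured when fixHost completed.
        guest := time.Unix(1717426413, 100380487)
        remote := time.Date(2024, 6, 3, 14, 53, 28, 220724800, time.UTC)

        skew := guest.Sub(remote)
        if skew < 0 {
            skew = -skew
        }
        fmt.Println("clock skew:", skew) // 4.879655687s, as reported in the log

        // Illustrative tolerance: when the skew is too large, the guest clock is
        // rewritten over SSH, as the log shows.
        if skew > 2*time.Second {
            fmt.Println("skew exceeds tolerance; guest clock needs to be set")
        }
    }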
	I0603 14:53:37.931304    9752 start.go:83] releasing machines lock for "multinode-720500-m02", held for 1m39.2291131s
	I0603 14:53:37.931427    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:40.065225    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:40.065758    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:40.065758    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:42.574215    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:42.575221    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:42.580055    9752 out.go:177] * Found network options:
	I0603 14:53:42.583237    9752 out.go:177]   - NO_PROXY=172.22.154.20
	W0603 14:53:42.584517    9752 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 14:53:42.587020    9752 out.go:177]   - NO_PROXY=172.22.154.20
	W0603 14:53:42.589996    9752 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 14:53:42.591046    9752 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 14:53:42.593813    9752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 14:53:42.593813    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:42.603476    9752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 14:53:42.603476    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:44.803724    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:44.803818    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:44.803818    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:44.848516    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:44.848516    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:44.848642    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:47.515409    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:47.515409    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:47.515409    9752 sshutil.go:53] new ssh client: &{IP:172.22.149.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:53:47.538501    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:47.538501    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:47.539516    9752 sshutil.go:53] new ssh client: &{IP:172.22.149.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:53:47.704412    9752 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 14:53:47.704533    9752 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1106787s)
	I0603 14:53:47.704592    9752 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0603 14:53:47.704592    9752 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1010745s)
	W0603 14:53:47.704592    9752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 14:53:47.715437    9752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 14:53:47.746454    9752 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0603 14:53:47.747215    9752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
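The find/mv step above sidelines any pre-existing bridge- or podman-style CNI configuration (here /etc/cni/net.d/87-podman-bridge.conflist) by renaming it with a .mk_disabled suffix, so it cannot shadow the CNI the cluster is actually configured to use. A Go sketch of the same renaming rule; the paths mirror the log, everything else is illustrative:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Mirrors the find/mv command above: disable bridge- and podman-style CNI configs.
        entries, err := filepath.Glob("/etc/cni/net.d/*")
        if err != nil {
            panic(err)
        }
        for _, p := range entries {
            base := filepath.Base(p)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already disabled
            }
            if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
                fmt.Println("disabling", p)
                if err := os.Rename(p, p+".mk_disabled"); err != nil {
                    panic(err)
                }
            }
        }
    }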
	I0603 14:53:47.747215    9752 start.go:494] detecting cgroup driver to use...
	I0603 14:53:47.747461    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:53:47.784875    9752 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 14:53:47.798913    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 14:53:47.828886    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 14:53:47.847234    9752 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 14:53:47.860461    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 14:53:47.891558    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 14:53:47.923422    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 14:53:47.954071    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 14:53:47.989321    9752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 14:53:48.025299    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 14:53:48.058121    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 14:53:48.092417    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 14:53:48.127212    9752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 14:53:48.145707    9752 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 14:53:48.158930    9752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 14:53:48.193873    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:53:48.393293    9752 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 14:53:48.427243    9752 start.go:494] detecting cgroup driver to use...
	I0603 14:53:48.440210    9752 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 14:53:48.463459    9752 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 14:53:48.463459    9752 command_runner.go:130] > [Unit]
	I0603 14:53:48.463459    9752 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 14:53:48.463459    9752 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 14:53:48.463459    9752 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 14:53:48.463459    9752 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 14:53:48.463459    9752 command_runner.go:130] > StartLimitBurst=3
	I0603 14:53:48.463459    9752 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 14:53:48.463459    9752 command_runner.go:130] > [Service]
	I0603 14:53:48.463459    9752 command_runner.go:130] > Type=notify
	I0603 14:53:48.463459    9752 command_runner.go:130] > Restart=on-failure
	I0603 14:53:48.463459    9752 command_runner.go:130] > Environment=NO_PROXY=172.22.154.20
	I0603 14:53:48.463459    9752 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 14:53:48.463459    9752 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 14:53:48.463459    9752 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 14:53:48.463459    9752 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 14:53:48.463459    9752 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 14:53:48.464025    9752 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 14:53:48.464025    9752 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 14:53:48.464025    9752 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 14:53:48.464082    9752 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 14:53:48.464116    9752 command_runner.go:130] > ExecStart=
	I0603 14:53:48.464116    9752 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 14:53:48.464154    9752 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 14:53:48.464195    9752 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 14:53:48.464229    9752 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 14:53:48.464229    9752 command_runner.go:130] > LimitNOFILE=infinity
	I0603 14:53:48.464314    9752 command_runner.go:130] > LimitNPROC=infinity
	I0603 14:53:48.464314    9752 command_runner.go:130] > LimitCORE=infinity
	I0603 14:53:48.464337    9752 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 14:53:48.464337    9752 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 14:53:48.464337    9752 command_runner.go:130] > TasksMax=infinity
	I0603 14:53:48.464395    9752 command_runner.go:130] > TimeoutStartSec=0
	I0603 14:53:48.464395    9752 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 14:53:48.464428    9752 command_runner.go:130] > Delegate=yes
	I0603 14:53:48.464458    9752 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 14:53:48.464458    9752 command_runner.go:130] > KillMode=process
	I0603 14:53:48.464458    9752 command_runner.go:130] > [Install]
	I0603 14:53:48.464458    9752 command_runner.go:130] > WantedBy=multi-user.target
	I0603 14:53:48.478554    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:53:48.514172    9752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 14:53:48.565797    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:53:48.602508    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 14:53:48.642096    9752 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 14:53:48.697682    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 14:53:48.722494    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:53:48.756161    9752 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 14:53:48.774650    9752 ssh_runner.go:195] Run: which cri-dockerd
	I0603 14:53:48.780598    9752 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 14:53:48.791952    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 14:53:48.809113    9752 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 14:53:48.853247    9752 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 14:53:49.053457    9752 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 14:53:49.246160    9752 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 14:53:49.246321    9752 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 14:53:49.290669    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:53:49.487975    9752 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 14:53:52.111216    9752 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6231579s)
	I0603 14:53:52.122712    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 14:53:52.160406    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 14:53:52.199360    9752 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 14:53:52.417094    9752 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 14:53:52.621731    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:53:52.841269    9752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 14:53:52.883968    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 14:53:52.920189    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:53:53.134247    9752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 14:53:53.244024    9752 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 14:53:53.256425    9752 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 14:53:53.265046    9752 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 14:53:53.265046    9752 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 14:53:53.265046    9752 command_runner.go:130] > Device: 0,22	Inode: 861         Links: 1
	I0603 14:53:53.265046    9752 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 14:53:53.265046    9752 command_runner.go:130] > Access: 2024-06-03 14:53:53.168530342 +0000
	I0603 14:53:53.265046    9752 command_runner.go:130] > Modify: 2024-06-03 14:53:53.168530342 +0000
	I0603 14:53:53.265046    9752 command_runner.go:130] > Change: 2024-06-03 14:53:53.172530347 +0000
	I0603 14:53:53.265046    9752 command_runner.go:130] >  Birth: -
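The run then waits up to 60 seconds for /var/run/cri-dockerd.sock to appear, using the plain stat shown above. A small Go sketch of such a bounded wait on a socket path; the 500ms poll interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a socket, or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        // Socket path and 60s budget taken from the log; the poll interval is illustrative.
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("cri-dockerd socket is ready")
    }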
	I0603 14:53:53.265046    9752 start.go:562] Will wait 60s for crictl version
	I0603 14:53:53.277615    9752 ssh_runner.go:195] Run: which crictl
	I0603 14:53:53.283882    9752 command_runner.go:130] > /usr/bin/crictl
	I0603 14:53:53.296082    9752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 14:53:53.348127    9752 command_runner.go:130] > Version:  0.1.0
	I0603 14:53:53.349006    9752 command_runner.go:130] > RuntimeName:  docker
	I0603 14:53:53.349006    9752 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 14:53:53.349006    9752 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 14:53:53.349006    9752 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 14:53:53.359180    9752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 14:53:53.390758    9752 command_runner.go:130] > 26.0.2
	I0603 14:53:53.401578    9752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 14:53:53.430671    9752 command_runner.go:130] > 26.0.2
	I0603 14:53:53.435918    9752 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 14:53:53.438746    9752 out.go:177]   - env NO_PROXY=172.22.154.20
	I0603 14:53:53.443234    9752 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 14:53:53.447613    9752 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 14:53:53.447613    9752 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 14:53:53.447613    9752 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 14:53:53.447613    9752 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 14:53:53.450614    9752 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 14:53:53.450614    9752 ip.go:210] interface addr: 172.22.144.1/20
	I0603 14:53:53.464382    9752 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 14:53:53.470729    9752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
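getIPForInterface above walks the host's network interfaces, skips the ones whose names do not start with "vEthernet (Default Switch)" (hence the two rejected interfaces in the log), and takes the matching interface's IPv4 address, 172.22.144.1, which is then written into the guest's /etc/hosts as host.minikube.internal. A compact Go sketch of that interface selection; the prefix is copied from the log and error handling is trimmed:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        const prefix = "vEthernet (Default Switch)" // the prefix searched for in the log
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            if !strings.HasPrefix(ifc.Name, prefix) {
                continue // e.g. "Ethernet 2" and "Loopback Pseudo-Interface 1" above
            }
            addrs, err := ifc.Addrs()
            if err != nil {
                continue
            }
            for _, a := range addrs {
                if ipNet, ok := a.(*net.IPNet); ok && ipNet.IP.To4() != nil {
                    fmt.Println("host-side IP:", ipNet.IP) // 172.22.144.1 in this run
                    return
                }
            }
        }
    }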
	I0603 14:53:53.492517    9752 mustload.go:65] Loading cluster: multinode-720500
	I0603 14:53:53.493207    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:53:53.493740    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:53:55.642893    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:55.642893    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:55.643442    9752 host.go:66] Checking if "multinode-720500" exists ...
	I0603 14:53:55.644221    9752 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500 for IP: 172.22.149.253
	I0603 14:53:55.644265    9752 certs.go:194] generating shared ca certs ...
	I0603 14:53:55.644298    9752 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:53:55.645064    9752 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 14:53:55.645182    9752 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 14:53:55.645744    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 14:53:55.646053    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 14:53:55.646053    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 14:53:55.646053    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 14:53:55.646664    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 14:53:55.646664    9752 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 14:53:55.647253    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 14:53:55.647253    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 14:53:55.647253    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 14:53:55.647866    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 14:53:55.648584    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 14:53:55.648764    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
	I0603 14:53:55.648764    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 14:53:55.648764    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:53:55.649285    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 14:53:55.702334    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 14:53:55.752104    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 14:53:55.798483    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 14:53:55.845865    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 14:53:55.890471    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 14:53:55.933517    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 14:53:55.991142    9752 ssh_runner.go:195] Run: openssl version
	I0603 14:53:56.000144    9752 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 14:53:56.012460    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 14:53:56.043637    9752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 14:53:56.053075    9752 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 14:53:56.053075    9752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 14:53:56.066794    9752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 14:53:56.075622    9752 command_runner.go:130] > 51391683
	I0603 14:53:56.088499    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
	I0603 14:53:56.120186    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 14:53:56.157251    9752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 14:53:56.164755    9752 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 14:53:56.164755    9752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 14:53:56.176836    9752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 14:53:56.185553    9752 command_runner.go:130] > 3ec20f2e
	I0603 14:53:56.198458    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 14:53:56.230025    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 14:53:56.262595    9752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:53:56.270502    9752 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:53:56.270602    9752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:53:56.282789    9752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:53:56.291451    9752 command_runner.go:130] > b5213941
	I0603 14:53:56.303855    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 14:53:56.336817    9752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 14:53:56.342957    9752 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 14:53:56.342957    9752 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 14:53:56.342957    9752 kubeadm.go:928] updating node {m02 172.22.149.253 8443 v1.30.1 docker false true} ...
	I0603 14:53:56.342957    9752 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-720500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.149.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 14:53:56.355056    9752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 14:53:56.375351    9752 command_runner.go:130] > kubeadm
	I0603 14:53:56.375351    9752 command_runner.go:130] > kubectl
	I0603 14:53:56.375351    9752 command_runner.go:130] > kubelet
	I0603 14:53:56.375351    9752 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 14:53:56.387278    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0603 14:53:56.404379    9752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0603 14:53:56.435350    9752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 14:53:56.480805    9752 ssh_runner.go:195] Run: grep 172.22.154.20	control-plane.minikube.internal$ /etc/hosts
	I0603 14:53:56.490209    9752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.154.20	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 14:53:56.526496    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:53:56.747019    9752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 14:53:56.779998    9752 host.go:66] Checking if "multinode-720500" exists ...
	I0603 14:53:56.780882    9752 start.go:316] joinCluster: &{Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.154.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.149.253 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.22.151.134 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:53:56.781145    9752 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.22.149.253 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 14:53:56.781145    9752 host.go:66] Checking if "multinode-720500-m02" exists ...
	I0603 14:53:56.781863    9752 mustload.go:65] Loading cluster: multinode-720500
	I0603 14:53:56.782528    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:53:56.783026    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:53:59.028919    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:59.029330    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:59.029330    9752 host.go:66] Checking if "multinode-720500" exists ...
	I0603 14:53:59.029932    9752 api_server.go:166] Checking apiserver status ...
	I0603 14:53:59.042835    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:53:59.042835    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:54:01.265059    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:54:01.265059    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:54:01.265059    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:54:03.879463    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:54:03.879463    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:54:03.879712    9752 sshutil.go:53] new ssh client: &{IP:172.22.154.20 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:54:03.992356    9752 command_runner.go:130] > 1877
	I0603 14:54:03.992489    9752 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.9496137s)
	I0603 14:54:04.008380    9752 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1877/cgroup
	W0603 14:54:04.029059    9752 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1877/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 14:54:04.042353    9752 ssh_runner.go:195] Run: ls
	I0603 14:54:04.050957    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:54:04.057746    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 200:
	ok
	I0603 14:54:04.070207    9752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-720500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0603 14:54:04.255055    9752 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-fmfz2, kube-system/kube-proxy-sm9rr
	I0603 14:54:07.287734    9752 command_runner.go:130] > node/multinode-720500-m02 cordoned
	I0603 14:54:07.287734    9752 command_runner.go:130] > pod "busybox-fc5497c4f-mjhcf" has DeletionTimestamp older than 1 seconds, skipping
	I0603 14:54:07.287734    9752 command_runner.go:130] > node/multinode-720500-m02 drained
	I0603 14:54:07.287999    9752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-720500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.2177653s)
	I0603 14:54:07.288088    9752 node.go:128] successfully drained node "multinode-720500-m02"
	I0603 14:54:07.288155    9752 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0603 14:54:07.288250    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:54:09.479975    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:54:09.479975    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:54:09.480229    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-720500" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-720500
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-720500: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-720500" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-720500	172.22.150.195
multinode-720500-m02	172.22.146.196
multinode-720500-m03	172.22.151.134

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-720500 -n multinode-720500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-720500 -n multinode-720500: (12.4435166s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 logs -n 25: (11.4580862s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-720500 cp testdata\cp-test.txt                                                                                 | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:38 UTC | 03 Jun 24 14:38 UTC |
	|         | multinode-720500-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-720500 ssh -n                                                                                                  | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:38 UTC | 03 Jun 24 14:39 UTC |
	|         | multinode-720500-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-720500 cp multinode-720500-m02:/home/docker/cp-test.txt                                                        | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:39 UTC | 03 Jun 24 14:39 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3456099304\001\cp-test_multinode-720500-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-720500 ssh -n                                                                                                  | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:39 UTC | 03 Jun 24 14:39 UTC |
	|         | multinode-720500-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-720500 cp multinode-720500-m02:/home/docker/cp-test.txt                                                        | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:39 UTC | 03 Jun 24 14:39 UTC |
	|         | multinode-720500:/home/docker/cp-test_multinode-720500-m02_multinode-720500.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-720500 ssh -n                                                                                                  | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:39 UTC | 03 Jun 24 14:39 UTC |
	|         | multinode-720500-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-720500 ssh -n multinode-720500 sudo cat                                                                        | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:39 UTC | 03 Jun 24 14:40 UTC |
	|         | /home/docker/cp-test_multinode-720500-m02_multinode-720500.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-720500 cp multinode-720500-m02:/home/docker/cp-test.txt                                                        | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:40 UTC | 03 Jun 24 14:40 UTC |
	|         | multinode-720500-m03:/home/docker/cp-test_multinode-720500-m02_multinode-720500-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-720500 ssh -n                                                                                                  | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:40 UTC | 03 Jun 24 14:40 UTC |
	|         | multinode-720500-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-720500 ssh -n multinode-720500-m03 sudo cat                                                                    | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:40 UTC | 03 Jun 24 14:40 UTC |
	|         | /home/docker/cp-test_multinode-720500-m02_multinode-720500-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-720500 cp testdata\cp-test.txt                                                                                 | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:40 UTC | 03 Jun 24 14:40 UTC |
	|         | multinode-720500-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-720500 ssh -n                                                                                                  | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:40 UTC | 03 Jun 24 14:40 UTC |
	|         | multinode-720500-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-720500 cp multinode-720500-m03:/home/docker/cp-test.txt                                                        | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:40 UTC | 03 Jun 24 14:41 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3456099304\001\cp-test_multinode-720500-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-720500 ssh -n                                                                                                  | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:41 UTC | 03 Jun 24 14:41 UTC |
	|         | multinode-720500-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-720500 cp multinode-720500-m03:/home/docker/cp-test.txt                                                        | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:41 UTC | 03 Jun 24 14:41 UTC |
	|         | multinode-720500:/home/docker/cp-test_multinode-720500-m03_multinode-720500.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-720500 ssh -n                                                                                                  | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:41 UTC | 03 Jun 24 14:41 UTC |
	|         | multinode-720500-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-720500 ssh -n multinode-720500 sudo cat                                                                        | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:41 UTC | 03 Jun 24 14:41 UTC |
	|         | /home/docker/cp-test_multinode-720500-m03_multinode-720500.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-720500 cp multinode-720500-m03:/home/docker/cp-test.txt                                                        | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:41 UTC | 03 Jun 24 14:42 UTC |
	|         | multinode-720500-m02:/home/docker/cp-test_multinode-720500-m03_multinode-720500-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-720500 ssh -n                                                                                                  | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:42 UTC | 03 Jun 24 14:42 UTC |
	|         | multinode-720500-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-720500 ssh -n multinode-720500-m02 sudo cat                                                                    | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:42 UTC | 03 Jun 24 14:42 UTC |
	|         | /home/docker/cp-test_multinode-720500-m03_multinode-720500-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-720500 node stop m03                                                                                           | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:42 UTC | 03 Jun 24 14:42 UTC |
	| node    | multinode-720500 node start                                                                                              | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:43 UTC | 03 Jun 24 14:46 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-720500                                                                                                 | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:46 UTC |                     |
	| stop    | -p multinode-720500                                                                                                      | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:46 UTC | 03 Jun 24 14:48 UTC |
	| start   | -p multinode-720500                                                                                                      | multinode-720500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 14:48 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 14:48:29
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 14:48:29.033726    9752 out.go:291] Setting OutFile to fd 1608 ...
	I0603 14:48:29.034543    9752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:48:29.034543    9752 out.go:304] Setting ErrFile to fd 1204...
	I0603 14:48:29.034543    9752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:48:29.059913    9752 out.go:298] Setting JSON to false
	I0603 14:48:29.065561    9752 start.go:129] hostinfo: {"hostname":"minikube3","uptime":27037,"bootTime":1717399071,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 14:48:29.066135    9752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 14:48:29.170301    9752 out.go:177] * [multinode-720500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 14:48:29.228986    9752 notify.go:220] Checking for updates...
	I0603 14:48:29.260718    9752 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:48:29.270991    9752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 14:48:29.312877    9752 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 14:48:29.323929    9752 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 14:48:29.359902    9752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 14:48:29.367166    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:48:29.367549    9752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 14:48:34.915447    9752 out.go:177] * Using the hyperv driver based on existing profile
	I0603 14:48:34.926221    9752 start.go:297] selected driver: hyperv
	I0603 14:48:34.926282    9752 start.go:901] validating driver "hyperv" against &{Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.150.195 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.146.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.22.151.134 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:48:34.926282    9752 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 14:48:34.983615    9752 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 14:48:34.983615    9752 cni.go:84] Creating CNI manager for ""
	I0603 14:48:34.983615    9752 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 14:48:34.984134    9752 start.go:340] cluster config:
	{Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.150.195 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.146.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.22.151.134 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:48:34.984134    9752 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 14:48:35.116720    9752 out.go:177] * Starting "multinode-720500" primary control-plane node in "multinode-720500" cluster
	I0603 14:48:35.126028    9752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 14:48:35.126360    9752 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 14:48:35.126360    9752 cache.go:56] Caching tarball of preloaded images
	I0603 14:48:35.126929    9752 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 14:48:35.127075    9752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 14:48:35.127075    9752 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:48:35.129977    9752 start.go:360] acquireMachinesLock for multinode-720500: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 14:48:35.129977    9752 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-720500"
	I0603 14:48:35.130979    9752 start.go:96] Skipping create...Using existing machine configuration
	I0603 14:48:35.130979    9752 fix.go:54] fixHost starting: 
	I0603 14:48:35.131216    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:48:37.961475    9752 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 14:48:37.962232    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:37.962555    9752 fix.go:112] recreateIfNeeded on multinode-720500: state=Stopped err=<nil>
	W0603 14:48:37.962610    9752 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 14:48:37.966652    9752 out.go:177] * Restarting existing hyperv VM for "multinode-720500" ...
	I0603 14:48:37.969729    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-720500
	I0603 14:48:41.039660    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:48:41.039660    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:41.039660    9752 main.go:141] libmachine: Waiting for host to start...
	I0603 14:48:41.039660    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:48:43.342153    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:48:43.342904    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:43.342960    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:48:45.881880    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:48:45.881880    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:46.884117    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:48:49.103915    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:48:49.104037    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:49.104037    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:48:51.648696    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:48:51.649337    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:52.656704    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:48:54.893056    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:48:54.893056    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:54.893965    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:48:57.449195    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:48:57.449195    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:48:58.454090    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:00.713698    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:00.713919    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:00.713919    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:03.303429    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:49:03.303429    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:04.313395    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:06.563037    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:06.563373    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:06.563373    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:09.121286    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:09.121375    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:09.124435    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:11.266115    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:11.266115    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:11.267086    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:13.790586    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:13.791715    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:13.792040    9752 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:49:13.794642    9752 machine.go:94] provisionDockerMachine start ...
	I0603 14:49:13.794903    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:15.909412    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:15.909412    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:15.909637    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:18.439632    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:18.440518    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:18.446685    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:49:18.447432    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:49:18.447432    9752 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 14:49:18.575024    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 14:49:18.575024    9752 buildroot.go:166] provisioning hostname "multinode-720500"
	I0603 14:49:18.575257    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:20.715549    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:20.716567    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:20.716567    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:23.280598    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:23.280654    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:23.286807    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:49:23.286975    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:49:23.286975    9752 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-720500 && echo "multinode-720500" | sudo tee /etc/hostname
	I0603 14:49:23.445247    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-720500
	
	I0603 14:49:23.445247    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:25.560706    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:25.560706    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:25.561383    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:28.078930    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:28.078930    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:28.084893    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:49:28.085420    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:49:28.085420    9752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-720500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-720500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-720500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 14:49:28.238233    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 14:49:28.238300    9752 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 14:49:28.238366    9752 buildroot.go:174] setting up certificates
	I0603 14:49:28.238428    9752 provision.go:84] configureAuth start
	I0603 14:49:28.238496    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:30.360753    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:30.360898    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:30.360898    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:32.921871    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:32.921871    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:32.921871    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:35.053432    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:35.053432    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:35.054034    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:37.619479    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:37.619705    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:37.619823    9752 provision.go:143] copyHostCerts
	I0603 14:49:37.619914    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 14:49:37.620347    9752 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 14:49:37.620347    9752 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 14:49:37.620796    9752 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 14:49:37.622012    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 14:49:37.622208    9752 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 14:49:37.622306    9752 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 14:49:37.622649    9752 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 14:49:37.623828    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 14:49:37.624080    9752 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 14:49:37.624156    9752 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 14:49:37.624551    9752 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 14:49:37.625494    9752 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-720500 san=[127.0.0.1 172.22.154.20 localhost minikube multinode-720500]
	I0603 14:49:37.848064    9752 provision.go:177] copyRemoteCerts
	I0603 14:49:37.860989    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 14:49:37.860989    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:39.985608    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:39.985608    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:39.985742    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:42.500636    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:42.501485    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:42.501572    9752 sshutil.go:53] new ssh client: &{IP:172.22.154.20 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:49:42.606230    9752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7441646s)
	I0603 14:49:42.606300    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 14:49:42.606805    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 14:49:42.653354    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 14:49:42.653354    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 14:49:42.701189    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 14:49:42.701189    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0603 14:49:42.751247    9752 provision.go:87] duration metric: took 14.5126318s to configureAuth
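
configureAuth regenerates the Docker server certificate so that its SANs cover the VM address (172.22.154.20), localhost and the machine name, signed by the CA kept under .minikube\certs, and then copies the three PEMs into /etc/docker. A rough stdlib-only sketch of that signing step follows; the key size and validity period are placeholders and this is not the libmachine code itself:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA key/cert stand in for ca.pem / ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs seen in the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-720500"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-720500"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.22.154.20")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
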
	I0603 14:49:42.751404    9752 buildroot.go:189] setting minikube options for container-runtime
	I0603 14:49:42.752015    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:49:42.752228    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:44.879240    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:44.879240    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:44.880170    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:47.388154    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:47.388154    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:47.395274    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:49:47.395274    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:49:47.395274    9752 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 14:49:47.523619    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 14:49:47.523681    9752 buildroot.go:70] root file system type: tmpfs
	I0603 14:49:47.523900    9752 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 14:49:47.523972    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:49.624987    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:49.625060    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:49.625132    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:52.152605    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:52.153750    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:52.159533    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:49:52.160219    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:49:52.160219    9752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 14:49:52.325685    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 14:49:52.325810    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:49:54.446568    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:49:54.447653    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:54.447653    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:49:56.946899    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:49:56.947038    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:49:56.954307    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:49:56.955367    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:49:56.955541    9752 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 14:49:59.453668    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 14:49:59.453668    9752 machine.go:97] duration metric: took 45.6585468s to provisionDockerMachine
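
Provisioning writes the rendered unit to /lib/systemd/system/docker.service.new and only swaps it into place (followed by daemon-reload, enable and restart) when diff reports a difference; on a freshly built VM the target file does not exist yet, so the "can't stat" message above is expected and the mv path is taken. A small text/template sketch of rendering the driver-specific ExecStart line, with the struct and field names invented for illustration:

    package main

    import (
        "os"
        "text/template"
    )

    // dockerOpts holds the values that vary per driver in the ExecStart
    // line of the generated docker.service unit.
    type dockerOpts struct {
        Provider         string
        InsecureRegistry string
    }

    const execStart = "ExecStart=\nExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}\n"

    func main() {
        t := template.Must(template.New("execstart").Parse(execStart))
        // Values taken from the unit rendered in the log above.
        _ = t.Execute(os.Stdout, dockerOpts{Provider: "hyperv", InsecureRegistry: "10.96.0.0/12"})
    }
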
	I0603 14:49:59.453668    9752 start.go:293] postStartSetup for "multinode-720500" (driver="hyperv")
	I0603 14:49:59.453668    9752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 14:49:59.465656    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 14:49:59.466651    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:50:01.597546    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:50:01.598582    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:01.598623    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:50:04.123124    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:50:04.123124    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:04.124085    9752 sshutil.go:53] new ssh client: &{IP:172.22.154.20 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:50:04.232405    9752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7657143s)
	I0603 14:50:04.247578    9752 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 14:50:04.255257    9752 command_runner.go:130] > NAME=Buildroot
	I0603 14:50:04.255257    9752 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 14:50:04.255257    9752 command_runner.go:130] > ID=buildroot
	I0603 14:50:04.255257    9752 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 14:50:04.255257    9752 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 14:50:04.255390    9752 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 14:50:04.255390    9752 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 14:50:04.256096    9752 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 14:50:04.256950    9752 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 14:50:04.256997    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 14:50:04.272630    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 14:50:04.294656    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 14:50:04.342460    9752 start.go:296] duration metric: took 4.8887521s for postStartSetup
	I0603 14:50:04.342460    9752 fix.go:56] duration metric: took 1m29.210749s for fixHost
	I0603 14:50:04.342460    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:50:06.506928    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:50:06.506928    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:06.507770    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:50:08.999719    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:50:09.000025    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:09.005781    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:50:09.006397    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:50:09.006397    9752 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 14:50:09.147055    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717426209.149042022
	
	I0603 14:50:09.147198    9752 fix.go:216] guest clock: 1717426209.149042022
	I0603 14:50:09.147198    9752 fix.go:229] Guest: 2024-06-03 14:50:09.149042022 +0000 UTC Remote: 2024-06-03 14:50:04.3424603 +0000 UTC m=+95.473466101 (delta=4.806581722s)
	I0603 14:50:09.147338    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:50:11.257684    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:50:11.257684    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:11.258609    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:50:13.800759    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:50:13.800930    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:13.806913    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:50:13.807365    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.154.20 22 <nil> <nil>}
	I0603 14:50:13.807365    9752 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717426209
	I0603 14:50:13.944040    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 14:50:09 UTC 2024
	
	I0603 14:50:13.944040    9752 fix.go:236] clock set: Mon Jun  3 14:50:09 UTC 2024
	 (err=<nil>)
	I0603 14:50:13.944040    9752 start.go:83] releasing machines lock for "multinode-720500", held for 1m38.813253s
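
fixHost finishes by comparing the guest clock (read with date +%s.%N) against the host-side timestamp of the last completed step; here the guest was about 4.8s ahead, so it is snapped back with sudo date -s @<unix-seconds>. A short sketch of that delta computation using the values from the log; the 2-second threshold is an assumption for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest clock as reported by `date +%s.%N` in the log above.
        guest := time.Unix(1717426209, 149042022)
        // Host-side reference time captured when postStartSetup finished.
        remote := time.Date(2024, 6, 3, 14, 50, 4, 342460300, time.UTC)

        delta := guest.Sub(remote)
        fmt.Printf("delta=%v\n", delta)

        // If the skew is large enough, reset the guest clock over SSH.
        if delta > 2*time.Second || delta < -2*time.Second {
            fmt.Printf("sudo date -s @%d\n", guest.Unix())
        }
    }
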
	I0603 14:50:13.944568    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:50:16.056880    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:50:16.057247    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:16.057383    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:50:18.573159    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:50:18.573287    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:18.577870    9752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 14:50:18.577959    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:50:18.588715    9752 ssh_runner.go:195] Run: cat /version.json
	I0603 14:50:18.588715    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:50:20.781452    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:50:20.781452    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:20.781452    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:50:20.782890    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:50:20.782890    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:20.783064    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:50:23.480985    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:50:23.481273    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:23.481273    9752 sshutil.go:53] new ssh client: &{IP:172.22.154.20 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:50:23.499831    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:50:23.500315    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:50:23.500489    9752 sshutil.go:53] new ssh client: &{IP:172.22.154.20 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:50:23.664510    9752 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 14:50:23.664510    9752 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0865094s)
	I0603 14:50:23.664510    9752 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0603 14:50:23.664868    9752 ssh_runner.go:235] Completed: cat /version.json: (5.0761106s)
	I0603 14:50:23.676417    9752 ssh_runner.go:195] Run: systemctl --version
	I0603 14:50:23.685755    9752 command_runner.go:130] > systemd 252 (252)
	I0603 14:50:23.685942    9752 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0603 14:50:23.698723    9752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 14:50:23.707730    9752 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0603 14:50:23.708130    9752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 14:50:23.718836    9752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 14:50:23.745447    9752 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0603 14:50:23.746088    9752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 14:50:23.746088    9752 start.go:494] detecting cgroup driver to use...
	I0603 14:50:23.746413    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:50:23.779239    9752 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 14:50:23.791357    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 14:50:23.821391    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 14:50:23.839481    9752 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 14:50:23.852034    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 14:50:23.881821    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 14:50:23.915768    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 14:50:23.946659    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 14:50:23.977991    9752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 14:50:24.007673    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 14:50:24.039790    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 14:50:24.079146    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 14:50:24.111707    9752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 14:50:24.130086    9752 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 14:50:24.142239    9752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 14:50:24.178614    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:50:24.387612    9752 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0603 14:50:24.419480    9752 start.go:494] detecting cgroup driver to use...
	I0603 14:50:24.432571    9752 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 14:50:24.454094    9752 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 14:50:24.454094    9752 command_runner.go:130] > [Unit]
	I0603 14:50:24.454094    9752 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 14:50:24.454094    9752 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 14:50:24.454403    9752 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 14:50:24.454403    9752 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 14:50:24.454403    9752 command_runner.go:130] > StartLimitBurst=3
	I0603 14:50:24.454465    9752 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 14:50:24.454465    9752 command_runner.go:130] > [Service]
	I0603 14:50:24.454465    9752 command_runner.go:130] > Type=notify
	I0603 14:50:24.454465    9752 command_runner.go:130] > Restart=on-failure
	I0603 14:50:24.454465    9752 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 14:50:24.454465    9752 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 14:50:24.454465    9752 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 14:50:24.454465    9752 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 14:50:24.454465    9752 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 14:50:24.454465    9752 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 14:50:24.454465    9752 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 14:50:24.454465    9752 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 14:50:24.454465    9752 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 14:50:24.454465    9752 command_runner.go:130] > ExecStart=
	I0603 14:50:24.454465    9752 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 14:50:24.454465    9752 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 14:50:24.454465    9752 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 14:50:24.454465    9752 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 14:50:24.454465    9752 command_runner.go:130] > LimitNOFILE=infinity
	I0603 14:50:24.454465    9752 command_runner.go:130] > LimitNPROC=infinity
	I0603 14:50:24.454465    9752 command_runner.go:130] > LimitCORE=infinity
	I0603 14:50:24.454465    9752 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 14:50:24.454465    9752 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 14:50:24.454465    9752 command_runner.go:130] > TasksMax=infinity
	I0603 14:50:24.454465    9752 command_runner.go:130] > TimeoutStartSec=0
	I0603 14:50:24.454465    9752 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 14:50:24.455042    9752 command_runner.go:130] > Delegate=yes
	I0603 14:50:24.455150    9752 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 14:50:24.455150    9752 command_runner.go:130] > KillMode=process
	I0603 14:50:24.455150    9752 command_runner.go:130] > [Install]
	I0603 14:50:24.455150    9752 command_runner.go:130] > WantedBy=multi-user.target
	I0603 14:50:24.468304    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:50:24.503178    9752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 14:50:24.542792    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:50:24.577927    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 14:50:24.612015    9752 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 14:50:24.671151    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 14:50:24.691092    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:50:24.723859    9752 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 14:50:24.738187    9752 ssh_runner.go:195] Run: which cri-dockerd
	I0603 14:50:24.744529    9752 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 14:50:24.755198    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 14:50:24.773151    9752 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 14:50:24.816336    9752 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 14:50:25.023790    9752 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 14:50:25.225274    9752 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 14:50:25.225549    9752 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 14:50:25.270969    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:50:25.473279    9752 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 14:50:28.102687    9752 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.628383s)
	I0603 14:50:28.114992    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 14:50:28.156703    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 14:50:28.193229    9752 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 14:50:28.396266    9752 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 14:50:28.611450    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:50:28.808534    9752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 14:50:28.848776    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 14:50:28.884709    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:50:29.087319    9752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 14:50:29.201633    9752 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 14:50:29.214914    9752 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 14:50:29.223057    9752 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 14:50:29.223116    9752 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 14:50:29.223153    9752 command_runner.go:130] > Device: 0,22	Inode: 851         Links: 1
	I0603 14:50:29.223153    9752 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 14:50:29.223153    9752 command_runner.go:130] > Access: 2024-06-03 14:50:29.114679823 +0000
	I0603 14:50:29.223153    9752 command_runner.go:130] > Modify: 2024-06-03 14:50:29.114679823 +0000
	I0603 14:50:29.223223    9752 command_runner.go:130] > Change: 2024-06-03 14:50:29.119679828 +0000
	I0603 14:50:29.223223    9752 command_runner.go:130] >  Birth: -
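
The "Will wait 60s for socket path" step is a simple poll for /var/run/cri-dockerd.sock to appear after the cri-docker.service restart; the stat output above shows it was already there. A hedged sketch of such a wait loop, with the poll interval chosen arbitrarily:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
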
	I0603 14:50:29.223282    9752 start.go:562] Will wait 60s for crictl version
	I0603 14:50:29.235862    9752 ssh_runner.go:195] Run: which crictl
	I0603 14:50:29.242226    9752 command_runner.go:130] > /usr/bin/crictl
	I0603 14:50:29.253215    9752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 14:50:29.307257    9752 command_runner.go:130] > Version:  0.1.0
	I0603 14:50:29.307340    9752 command_runner.go:130] > RuntimeName:  docker
	I0603 14:50:29.307340    9752 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 14:50:29.307381    9752 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 14:50:29.307381    9752 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 14:50:29.317342    9752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 14:50:29.349500    9752 command_runner.go:130] > 26.0.2
	I0603 14:50:29.359517    9752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 14:50:29.389620    9752 command_runner.go:130] > 26.0.2
	I0603 14:50:29.394562    9752 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 14:50:29.394562    9752 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 14:50:29.399573    9752 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 14:50:29.399573    9752 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 14:50:29.399573    9752 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 14:50:29.399573    9752 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 14:50:29.401870    9752 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 14:50:29.401870    9752 ip.go:210] interface addr: 172.22.144.1/20
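
To expose the Windows host inside the guest as host.minikube.internal, the host's network interfaces are walked, the one matching the Hyper-V switch name ("vEthernet (Default Switch)") is selected, and its IPv4 address (172.22.144.1 here) is used. A stdlib sketch of that lookup; the helper name is illustrative:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // ipForInterface returns the first IPv4 address of an interface whose
    // name starts with prefix, mirroring the search logged above.
    func ipForInterface(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue
            }
            addrs, err := iface.Addrs()
            if err != nil {
                continue
            }
            for _, addr := range addrs {
                if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    return ipnet.IP, nil
                }
            }
        }
        return nil, fmt.Errorf("no interface matching %q", prefix)
    }

    func main() {
        ip, err := ipForInterface("vEthernet (Default Switch)")
        fmt.Println(ip, err)
    }
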
	I0603 14:50:29.416773    9752 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 14:50:29.423378    9752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 14:50:29.444808    9752 kubeadm.go:877] updating cluster {Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.154.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.146.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.22.151.134 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 14:50:29.445780    9752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 14:50:29.455433    9752 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 14:50:29.479242    9752 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 14:50:29.479839    9752 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 14:50:29.479839    9752 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 14:50:29.479839    9752 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 14:50:29.479839    9752 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0603 14:50:29.479839    9752 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 14:50:29.479903    9752 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 14:50:29.479903    9752 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 14:50:29.479903    9752 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 14:50:29.479903    9752 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0603 14:50:29.480099    9752 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0603 14:50:29.480194    9752 docker.go:615] Images already preloaded, skipping extraction
	I0603 14:50:29.490256    9752 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0603 14:50:29.515638    9752 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0603 14:50:29.515688    9752 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 14:50:29.515688    9752 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0603 14:50:29.515755    9752 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0603 14:50:29.515755    9752 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0603 14:50:29.515755    9752 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0603 14:50:29.515755    9752 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0603 14:50:29.515819    9752 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0603 14:50:29.515819    9752 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 14:50:29.515819    9752 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0603 14:50:29.515885    9752 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0603 14:50:29.515925    9752 cache_images.go:84] Images are preloaded, skipping loading
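
The output of docker images --format {{.Repository}}:{{.Tag}} is compared against the image set expected for Kubernetes v1.30.1, and since everything is already present the preload tarball is not extracted again. A rough sketch of that containment check, with the expected list copied from the log and the helper name invented for illustration:

    package main

    import "fmt"

    // imagesPreloaded reports whether every expected image appears in got.
    func imagesPreloaded(got, expected []string) bool {
        have := make(map[string]bool, len(got))
        for _, img := range got {
            have[img] = true
        }
        for _, img := range expected {
            if !have[img] {
                return false
            }
        }
        return true
    }

    func main() {
        expected := []string{
            "registry.k8s.io/kube-apiserver:v1.30.1",
            "registry.k8s.io/kube-controller-manager:v1.30.1",
            "registry.k8s.io/kube-scheduler:v1.30.1",
            "registry.k8s.io/kube-proxy:v1.30.1",
            "registry.k8s.io/etcd:3.5.12-0",
            "registry.k8s.io/coredns/coredns:v1.11.1",
            "registry.k8s.io/pause:3.9",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        // In the run above, `docker images` already listed all of these.
        fmt.Println(imagesPreloaded(expected, expected))
    }
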
	I0603 14:50:29.515992    9752 kubeadm.go:928] updating node { 172.22.154.20 8443 v1.30.1 docker true true} ...
	I0603 14:50:29.516257    9752 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-720500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.154.20
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 14:50:29.526981    9752 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0603 14:50:29.557673    9752 command_runner.go:130] > cgroupfs
	I0603 14:50:29.559006    9752 cni.go:84] Creating CNI manager for ""
	I0603 14:50:29.559006    9752 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 14:50:29.559072    9752 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
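
With three nodes in the profile, the CNI manager recommends kindnet instead of the single-node default, and the cluster-wide pod CIDR 10.244.0.0/16 is threaded into the kubeadm and kube-proxy configs below. A toy sketch of that selection rule; the function and the single-node fallback shown are illustrative, not minikube's exact logic:

    package main

    import "fmt"

    // chooseCNI mimics the decision logged above: an explicit user choice
    // wins, otherwise multi-node clusters get kindnet.
    func chooseCNI(userChoice string, nodeCount int) string {
        if userChoice != "" {
            return userChoice
        }
        if nodeCount > 1 {
            return "kindnet"
        }
        return "bridge"
    }

    func main() {
        fmt.Println(chooseCNI("", 3)) // kindnet, as in the log
    }
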
	I0603 14:50:29.559127    9752 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.22.154.20 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-720500 NodeName:multinode-720500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.22.154.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.22.154.20 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 14:50:29.559289    9752 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.22.154.20
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-720500"
	  kubeletExtraArgs:
	    node-ip: 172.22.154.20
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.22.154.20"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
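
The generated config pins pods to 10.244.0.0/16 and services to 10.96.0.0/12 while the node itself sits on the Hyper-V switch subnet (172.22.154.20); those ranges must stay disjoint from each other and from the node network, which is easy to sanity-check. A small stdlib sketch of such a check, purely illustrative and not part of minikube:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        podCIDR := netip.MustParsePrefix("10.244.0.0/16")
        svcCIDR := netip.MustParsePrefix("10.96.0.0/12")
        nodeIP := netip.MustParseAddr("172.22.154.20")

        // The node address must live outside both cluster-internal ranges.
        fmt.Println("node in pod CIDR:    ", podCIDR.Contains(nodeIP))
        fmt.Println("node in service CIDR:", svcCIDR.Contains(nodeIP))
        // And the two internal ranges must not overlap each other.
        fmt.Println("ranges overlap:      ", podCIDR.Overlaps(svcCIDR))
    }
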
	
	I0603 14:50:29.572579    9752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 14:50:29.590342    9752 command_runner.go:130] > kubeadm
	I0603 14:50:29.590342    9752 command_runner.go:130] > kubectl
	I0603 14:50:29.590342    9752 command_runner.go:130] > kubelet
	I0603 14:50:29.590342    9752 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 14:50:29.603028    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 14:50:29.619684    9752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0603 14:50:29.648429    9752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 14:50:29.679305    9752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0603 14:50:29.725797    9752 ssh_runner.go:195] Run: grep 172.22.154.20	control-plane.minikube.internal$ /etc/hosts
	I0603 14:50:29.731212    9752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.154.20	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 14:50:29.762682    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:50:29.964153    9752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 14:50:29.992948    9752 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500 for IP: 172.22.154.20
	I0603 14:50:29.993022    9752 certs.go:194] generating shared ca certs ...
	I0603 14:50:29.993022    9752 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:50:29.993685    9752 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 14:50:29.994104    9752 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 14:50:29.994405    9752 certs.go:256] generating profile certs ...
	I0603 14:50:29.994787    9752 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\client.key
	I0603 14:50:29.994787    9752 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key.fba88185
	I0603 14:50:29.995403    9752 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt.fba88185 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.22.154.20]
	I0603 14:50:30.282819    9752 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt.fba88185 ...
	I0603 14:50:30.282819    9752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt.fba88185: {Name:mk3ce09f3dfeb295693de4a303e0d19d5ad4f0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:50:30.284094    9752 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key.fba88185 ...
	I0603 14:50:30.284094    9752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key.fba88185: {Name:mk72162fc69bc37c51dc41730eaf528bd7879cbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:50:30.290035    9752 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt.fba88185 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt
	I0603 14:50:30.296118    9752 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key.fba88185 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key
	I0603 14:50:30.302065    9752 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.key
	I0603 14:50:30.302065    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 14:50:30.302065    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 14:50:30.302853    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 14:50:30.302916    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 14:50:30.302916    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 14:50:30.302916    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 14:50:30.303446    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 14:50:30.303743    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 14:50:30.304061    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 14:50:30.304584    9752 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 14:50:30.304755    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 14:50:30.304827    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 14:50:30.304827    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 14:50:30.305649    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 14:50:30.306167    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 14:50:30.306446    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 14:50:30.306650    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:50:30.306844    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
	I0603 14:50:30.308384    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 14:50:30.357242    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 14:50:30.408052    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 14:50:30.466550    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 14:50:30.509530    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 14:50:30.552860    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 14:50:30.598562    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 14:50:30.641657    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 14:50:30.685377    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 14:50:30.729265    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 14:50:30.772687    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 14:50:30.814997    9752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 14:50:30.857563    9752 ssh_runner.go:195] Run: openssl version
	I0603 14:50:30.866181    9752 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 14:50:30.879178    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 14:50:30.910588    9752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:50:30.917811    9752 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:50:30.917919    9752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:50:30.930458    9752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:50:30.938518    9752 command_runner.go:130] > b5213941
	I0603 14:50:30.951780    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 14:50:30.983814    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 14:50:31.014838    9752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 14:50:31.022141    9752 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 14:50:31.022693    9752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 14:50:31.034123    9752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 14:50:31.042974    9752 command_runner.go:130] > 51391683
	I0603 14:50:31.055159    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
	I0603 14:50:31.091504    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 14:50:31.122571    9752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 14:50:31.129679    9752 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 14:50:31.130694    9752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 14:50:31.142979    9752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 14:50:31.151940    9752 command_runner.go:130] > 3ec20f2e
	I0603 14:50:31.165559    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
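
Note on the certificate steps above: each CA bundle copied to /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout -in <cert>" and then symlinked into /etc/ssl/certs as <hash>.0, which is how OpenSSL locates trusted CAs by subject hash. The Go snippet below is a minimal sketch of that hash-and-link step, shelling out to openssl just as the ssh_runner commands do; the function name is hypothetical and the path is taken from the log, so treat it as an illustration rather than minikube's actual certs.go logic.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of a certificate and
// symlinks it into /etc/ssl/certs as <hash>.0, mirroring the
// "openssl x509 -hash" + "ln -fs" pair seen in the log.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
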
	I0603 14:50:31.196576    9752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 14:50:31.203514    9752 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 14:50:31.203514    9752 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0603 14:50:31.203514    9752 command_runner.go:130] > Device: 8,1	Inode: 5243218     Links: 1
	I0603 14:50:31.203514    9752 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 14:50:31.203514    9752 command_runner.go:130] > Access: 2024-06-03 14:27:05.373933748 +0000
	I0603 14:50:31.203514    9752 command_runner.go:130] > Modify: 2024-06-03 14:27:05.373933748 +0000
	I0603 14:50:31.203514    9752 command_runner.go:130] > Change: 2024-06-03 14:27:05.373933748 +0000
	I0603 14:50:31.203514    9752 command_runner.go:130] >  Birth: 2024-06-03 14:27:05.373933748 +0000
	I0603 14:50:31.214709    9752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 14:50:31.223631    9752 command_runner.go:130] > Certificate will not expire
	I0603 14:50:31.236029    9752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 14:50:31.244712    9752 command_runner.go:130] > Certificate will not expire
	I0603 14:50:31.256468    9752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 14:50:31.266297    9752 command_runner.go:130] > Certificate will not expire
	I0603 14:50:31.279817    9752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 14:50:31.289926    9752 command_runner.go:130] > Certificate will not expire
	I0603 14:50:31.303055    9752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 14:50:31.313094    9752 command_runner.go:130] > Certificate will not expire
	I0603 14:50:31.326077    9752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 14:50:31.335901    9752 command_runner.go:130] > Certificate will not expire
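
The repeated "openssl x509 -noout -checkend 86400" runs above confirm that every control-plane certificate stays valid for at least another 24 hours before the existing certs are reused. A rough Go equivalent using crypto/x509 directly (an assumption made for illustration; the log shows minikube shelling out to openssl instead) might look like this:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window, which is what "-checkend 86400" tests for 24h.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; adjust to the certificate you want to check.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire within 24h")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
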
	I0603 14:50:31.336096    9752 kubeadm.go:391] StartCluster: {Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.154.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.146.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.22.151.134 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:50:31.346639    9752 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 14:50:31.383771    9752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 14:50:31.402548    9752 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0603 14:50:31.402548    9752 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0603 14:50:31.402548    9752 command_runner.go:130] > /var/lib/minikube/etcd:
	I0603 14:50:31.402548    9752 command_runner.go:130] > member
	W0603 14:50:31.403604    9752 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 14:50:31.403604    9752 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 14:50:31.403604    9752 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 14:50:31.415631    9752 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 14:50:31.433674    9752 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 14:50:31.435767    9752 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-720500" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:50:31.436276    9752 kubeconfig.go:62] C:\Users\jenkins.minikube3\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-720500" cluster setting kubeconfig missing "multinode-720500" context setting]
	I0603 14:50:31.436642    9752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:50:31.452263    9752 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:50:31.452912    9752 kapi.go:59] client config for multinode-720500: &rest.Config{Host:"https://172.22.154.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-720500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-720500/client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bbd8a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 14:50:31.454810    9752 cert_rotation.go:137] Starting client certificate rotation controller
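
At this point the host-side kubeconfig is repaired: the "multinode-720500" cluster and context entries are missing, so they are rewritten under a file lock before the client config above is constructed. The sketch below shows the general shape of such a repair using client-go's clientcmd package; the file paths are placeholders, the user credentials are assumed to exist in the file already, and the locking seen in the log is omitted.

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds (or overwrites) the cluster and context entries for a
// profile, which is roughly what the "needs updating (will repair)" step does.
func repairKubeconfig(path, name, server, caPath string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caPath}
	// The context points at an AuthInfo entry assumed to be present already.
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	// Server address and profile name mirror the log; the paths are placeholders.
	_ = repairKubeconfig("kubeconfig", "multinode-720500",
		"https://172.22.154.20:8443", "ca.crt")
}
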
	I0603 14:50:31.466380    9752 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 14:50:31.489965    9752 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0603 14:50:31.489965    9752 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0603 14:50:31.489965    9752 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0603 14:50:31.489965    9752 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0603 14:50:31.489965    9752 command_runner.go:130] >  kind: InitConfiguration
	I0603 14:50:31.489965    9752 command_runner.go:130] >  localAPIEndpoint:
	I0603 14:50:31.489965    9752 command_runner.go:130] > -  advertiseAddress: 172.22.150.195
	I0603 14:50:31.489965    9752 command_runner.go:130] > +  advertiseAddress: 172.22.154.20
	I0603 14:50:31.489965    9752 command_runner.go:130] >    bindPort: 8443
	I0603 14:50:31.489965    9752 command_runner.go:130] >  bootstrapTokens:
	I0603 14:50:31.489965    9752 command_runner.go:130] >    - groups:
	I0603 14:50:31.489965    9752 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0603 14:50:31.489965    9752 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0603 14:50:31.489965    9752 command_runner.go:130] >    name: "multinode-720500"
	I0603 14:50:31.489965    9752 command_runner.go:130] >    kubeletExtraArgs:
	I0603 14:50:31.489965    9752 command_runner.go:130] > -    node-ip: 172.22.150.195
	I0603 14:50:31.489965    9752 command_runner.go:130] > +    node-ip: 172.22.154.20
	I0603 14:50:31.489965    9752 command_runner.go:130] >    taints: []
	I0603 14:50:31.489965    9752 command_runner.go:130] >  ---
	I0603 14:50:31.489965    9752 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0603 14:50:31.489965    9752 command_runner.go:130] >  kind: ClusterConfiguration
	I0603 14:50:31.489965    9752 command_runner.go:130] >  apiServer:
	I0603 14:50:31.489965    9752 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.22.150.195"]
	I0603 14:50:31.489965    9752 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.22.154.20"]
	I0603 14:50:31.489965    9752 command_runner.go:130] >    extraArgs:
	I0603 14:50:31.489965    9752 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0603 14:50:31.489965    9752 command_runner.go:130] >  controllerManager:
	I0603 14:50:31.489965    9752 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.22.150.195
	+  advertiseAddress: 172.22.154.20
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-720500"
	   kubeletExtraArgs:
	-    node-ip: 172.22.150.195
	+    node-ip: 172.22.154.20
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.22.150.195"]
	+  certSANs: ["127.0.0.1", "localhost", "172.22.154.20"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
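
The diff above is how the restart decides whether the node needs reconfiguring: the staged /var/tmp/minikube/kubeadm.yaml.new is compared against the copy already on the node, and because the control-plane IP moved from 172.22.150.195 to 172.22.154.20 the config is treated as drifted. A minimal sketch of that drift check, relying on diff's exit codes the same way the ssh_runner call does (function name and local paths are illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeadmConfigChanged runs "diff -u old new"; diff exits 0 when the files
// match, 1 when they differ, and anything else signals a real error.
func kubeadmConfigChanged(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical, nothing to do
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // drift detected
	}
	return false, "", err
}

func main() {
	changed, diff, err := kubeadmConfigChanged("kubeadm.yaml", "kubeadm.yaml.new")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if changed {
		fmt.Println("detected kubeadm config drift:")
		fmt.Println(diff)
	}
}
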
	I0603 14:50:31.489965    9752 kubeadm.go:1154] stopping kube-system containers ...
	I0603 14:50:31.495744    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0603 14:50:31.524883    9752 command_runner.go:130] > 68e49c3e6dda
	I0603 14:50:31.524883    9752 command_runner.go:130] > 097ab9a9a33b
	I0603 14:50:31.524883    9752 command_runner.go:130] > 38b548c7f105
	I0603 14:50:31.524883    9752 command_runner.go:130] > 1ac710138e87
	I0603 14:50:31.524883    9752 command_runner.go:130] > ab840a6a9856
	I0603 14:50:31.524883    9752 command_runner.go:130] > 3823f2e2bdb2
	I0603 14:50:31.524883    9752 command_runner.go:130] > 91df341636e8
	I0603 14:50:31.524883    9752 command_runner.go:130] > 45c98b77811e
	I0603 14:50:31.524883    9752 command_runner.go:130] > dcd798ff8a46
	I0603 14:50:31.524883    9752 command_runner.go:130] > 5185046feae6
	I0603 14:50:31.524883    9752 command_runner.go:130] > 63a6ebee2e83
	I0603 14:50:31.524883    9752 command_runner.go:130] > ec3860b2bb3e
	I0603 14:50:31.524883    9752 command_runner.go:130] > 19b3080db261
	I0603 14:50:31.524883    9752 command_runner.go:130] > 73f8312902b0
	I0603 14:50:31.524883    9752 command_runner.go:130] > bf3e16838818
	I0603 14:50:31.524883    9752 command_runner.go:130] > 7dbe33ccede8
	I0603 14:50:31.524883    9752 docker.go:483] Stopping containers: [68e49c3e6dda 097ab9a9a33b 38b548c7f105 1ac710138e87 ab840a6a9856 3823f2e2bdb2 91df341636e8 45c98b77811e dcd798ff8a46 5185046feae6 63a6ebee2e83 ec3860b2bb3e 19b3080db261 73f8312902b0 bf3e16838818 7dbe33ccede8]
	I0603 14:50:31.537637    9752 ssh_runner.go:195] Run: docker stop 68e49c3e6dda 097ab9a9a33b 38b548c7f105 1ac710138e87 ab840a6a9856 3823f2e2bdb2 91df341636e8 45c98b77811e dcd798ff8a46 5185046feae6 63a6ebee2e83 ec3860b2bb3e 19b3080db261 73f8312902b0 bf3e16838818 7dbe33ccede8
	I0603 14:50:31.565425    9752 command_runner.go:130] > 68e49c3e6dda
	I0603 14:50:31.565568    9752 command_runner.go:130] > 097ab9a9a33b
	I0603 14:50:31.565568    9752 command_runner.go:130] > 38b548c7f105
	I0603 14:50:31.565568    9752 command_runner.go:130] > 1ac710138e87
	I0603 14:50:31.565623    9752 command_runner.go:130] > ab840a6a9856
	I0603 14:50:31.565623    9752 command_runner.go:130] > 3823f2e2bdb2
	I0603 14:50:31.565623    9752 command_runner.go:130] > 91df341636e8
	I0603 14:50:31.565659    9752 command_runner.go:130] > 45c98b77811e
	I0603 14:50:31.565659    9752 command_runner.go:130] > dcd798ff8a46
	I0603 14:50:31.565697    9752 command_runner.go:130] > 5185046feae6
	I0603 14:50:31.565697    9752 command_runner.go:130] > 63a6ebee2e83
	I0603 14:50:31.565731    9752 command_runner.go:130] > ec3860b2bb3e
	I0603 14:50:31.565731    9752 command_runner.go:130] > 19b3080db261
	I0603 14:50:31.565731    9752 command_runner.go:130] > 73f8312902b0
	I0603 14:50:31.565731    9752 command_runner.go:130] > bf3e16838818
	I0603 14:50:31.565731    9752 command_runner.go:130] > 7dbe33ccede8
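
Before reconfiguring, every kube-system container is stopped: docker ps lists the IDs of containers whose names match k8s_*_(kube-system)_, and a single docker stop is issued for the whole batch, as echoed above. A compact sketch of those two invocations (the helper name is made up for the example):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists kube-system pod containers by name pattern
// and stops them in one "docker stop" call, mirroring the log.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	fmt.Println("Stopping containers:", strings.Join(ids, " "))
	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println("error:", err)
	}
}
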
	I0603 14:50:31.578802    9752 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 14:50:31.617716    9752 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 14:50:31.635887    9752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0603 14:50:31.635887    9752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0603 14:50:31.636645    9752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0603 14:50:31.636645    9752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 14:50:31.636967    9752 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 14:50:31.637025    9752 kubeadm.go:156] found existing configuration files:
	
	I0603 14:50:31.648483    9752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 14:50:31.665306    9752 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 14:50:31.665385    9752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 14:50:31.677521    9752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 14:50:31.709088    9752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 14:50:31.725891    9752 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 14:50:31.726839    9752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 14:50:31.739642    9752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 14:50:31.769317    9752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 14:50:31.786917    9752 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 14:50:31.787226    9752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 14:50:31.800374    9752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 14:50:31.833312    9752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 14:50:31.851422    9752 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 14:50:31.852393    9752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 14:50:31.864186    9752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
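
Each of the four static kubeconfig files under /etc/kubernetes is then checked for the expected control-plane endpoint and removed when the endpoint is absent (here all four files were missing outright, so the rm calls are no-ops that simply clear the way for kubeadm to regenerate them). A small sketch of that per-file check, with the grep replaced by a substring search; paths and endpoint are taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfig deletes a kubeconfig that does not reference the
// expected control-plane endpoint; missing files are simply skipped.
func removeStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil
		}
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // still points at the right endpoint, keep it
	}
	fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeStaleKubeconfig(f, endpoint); err != nil {
			fmt.Println("error:", err)
		}
	}
}
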
	I0603 14:50:31.894499    9752 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 14:50:31.913712    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 14:50:32.213078    9752 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 14:50:32.213078    9752 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0603 14:50:32.213078    9752 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0603 14:50:32.213078    9752 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 14:50:32.213204    9752 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0603 14:50:32.213204    9752 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0603 14:50:32.213204    9752 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0603 14:50:32.213204    9752 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0603 14:50:32.213204    9752 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0603 14:50:32.213297    9752 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 14:50:32.213345    9752 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 14:50:32.213345    9752 command_runner.go:130] > [certs] Using the existing "sa" key
	I0603 14:50:32.213345    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 14:50:33.401490    9752 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 14:50:33.401490    9752 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 14:50:33.401490    9752 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 14:50:33.401490    9752 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 14:50:33.401490    9752 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 14:50:33.401490    9752 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 14:50:33.401490    9752 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1881348s)
	I0603 14:50:33.401490    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 14:50:33.713996    9752 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 14:50:33.713996    9752 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 14:50:33.713996    9752 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0603 14:50:33.714130    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 14:50:33.794194    9752 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 14:50:33.794286    9752 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 14:50:33.794286    9752 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 14:50:33.794286    9752 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 14:50:33.794360    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 14:50:33.890515    9752 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
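
Rather than running a full "kubeadm init", the restart replays the individual init phases against the staged config, which lets kubeadm reuse the existing CA, certificates, and etcd data while regenerating the kubeconfig files and static Pod manifests. The phase sequence above could be driven like this (a sketch; minikube actually runs each phase over SSH with its bundled kubeadm binary under /var/lib/minikube/binaries):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runKubeadmPhases replays the init phases seen in the log, in order.
func runKubeadmPhases(configPath string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", configPath},
		{"init", "phase", "kubeconfig", "all", "--config", configPath},
		{"init", "phase", "kubelet-start", "--config", configPath},
		{"init", "phase", "control-plane", "all", "--config", configPath},
		{"init", "phase", "etcd", "local", "--config", configPath},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", args, err)
		}
	}
	return nil
}

func main() {
	if err := runKubeadmPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
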
	I0603 14:50:33.890515    9752 api_server.go:52] waiting for apiserver process to appear ...
	I0603 14:50:33.903721    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:50:34.406708    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:50:34.912875    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:50:35.407053    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:50:35.907388    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:50:35.938200    9752 command_runner.go:130] > 1877
	I0603 14:50:35.938200    9752 api_server.go:72] duration metric: took 2.0476689s to wait for apiserver process to appear ...
	I0603 14:50:35.938200    9752 api_server.go:88] waiting for apiserver healthz status ...
	I0603 14:50:35.938200    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:50:39.322888    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 14:50:39.323845    9752 api_server.go:103] status: https://172.22.154.20:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 14:50:39.323881    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:50:39.392354    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 14:50:39.392354    9752 api_server.go:103] status: https://172.22.154.20:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 14:50:39.445637    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:50:39.461120    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 14:50:39.461188    9752 api_server.go:103] status: https://172.22.154.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 14:50:39.948070    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:50:39.964441    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 14:50:39.964652    9752 api_server.go:103] status: https://172.22.154.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 14:50:40.438860    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:50:40.450090    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 14:50:40.450232    9752 api_server.go:103] status: https://172.22.154.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 14:50:40.945934    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:50:40.953114    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 200:
	ok
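
The healthz polling above shows the apiserver coming back in stages: first 403 while the request is still treated as anonymous, then 500 while post-start hooks (rbac/bootstrap-roles, the bootstrap and aggregator controllers, priority classes) finish, and finally 200 "ok". A minimal sketch of such a poll loop follows; it skips TLS verification for brevity, which is an assumption made for the example, whereas the real client is configured with minikube's own CA and client certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip TLS verification instead of wiring up the CA
		// bundle and client certificate that the real client config uses.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	// Endpoint taken from the log; adjust for your own cluster.
	if err := waitForHealthz("https://172.22.154.20:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
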
	I0603 14:50:40.954001    9752 round_trippers.go:463] GET https://172.22.154.20:8443/version
	I0603 14:50:40.954077    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:40.954077    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:40.954171    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:40.970045    9752 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0603 14:50:40.970045    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:40.970045    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:40.970045    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:40.970045    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:40.970045    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:40.970045    9752 round_trippers.go:580]     Content-Length: 263
	I0603 14:50:40.970045    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:40 GMT
	I0603 14:50:40.970045    9752 round_trippers.go:580]     Audit-Id: 768ed4ca-76db-429c-9788-7f3f81fb4cdd
	I0603 14:50:40.970257    9752 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 14:50:40.970353    9752 api_server.go:141] control plane version: v1.30.1
	I0603 14:50:40.970460    9752 api_server.go:131] duration metric: took 5.0322185s to wait for apiserver health ...
	I0603 14:50:40.970460    9752 cni.go:84] Creating CNI manager for ""
	I0603 14:50:40.970513    9752 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 14:50:40.974328    9752 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 14:50:40.988680    9752 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 14:50:41.002893    9752 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0603 14:50:41.002989    9752 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0603 14:50:41.002989    9752 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0603 14:50:41.002989    9752 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 14:50:41.002989    9752 command_runner.go:130] > Access: 2024-06-03 14:49:06.725646200 +0000
	I0603 14:50:41.002989    9752 command_runner.go:130] > Modify: 2024-05-22 23:10:00.000000000 +0000
	I0603 14:50:41.002989    9752 command_runner.go:130] > Change: 2024-06-03 14:48:56.608000000 +0000
	I0603 14:50:41.002989    9752 command_runner.go:130] >  Birth: -
	I0603 14:50:41.002989    9752 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 14:50:41.003157    9752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 14:50:41.100030    9752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0603 14:50:42.138239    9752 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0603 14:50:42.138459    9752 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0603 14:50:42.138459    9752 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0603 14:50:42.138459    9752 command_runner.go:130] > daemonset.apps/kindnet configured
	I0603 14:50:42.138528    9752 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.0384887s)
	I0603 14:50:42.138636    9752 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 14:50:42.138837    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods
	I0603 14:50:42.138872    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.138872    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.138872    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.149280    9752 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 14:50:42.149639    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.149639    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.149639    9752 round_trippers.go:580]     Audit-Id: 7117e1ad-541b-4bc1-ba2a-030ea5d6cdd6
	I0603 14:50:42.149639    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.149639    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.149639    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.149701    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.150979    9752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1818"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 79260 chars]
	I0603 14:50:42.157663    9752 system_pods.go:59] 11 kube-system pods found
	I0603 14:50:42.157663    9752 system_pods.go:61] "coredns-7db6d8ff4d-c9wpc" [5d120704-a803-4278-aa7c-32304a6164a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 14:50:42.157663    9752 system_pods.go:61] "etcd-multinode-720500" [1a2533a2-16e9-4696-9694-186579c52b55] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 14:50:42.157663    9752 system_pods.go:61] "kindnet-26s27" [08ea7c30-4962-4026-8eb0-6864835e97e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0603 14:50:42.157663    9752 system_pods.go:61] "kindnet-fmfz2" [78515e23-16d2-4a8e-9845-375aa17ab80b] Running
	I0603 14:50:42.157663    9752 system_pods.go:61] "kindnet-h58hc" [43c48b16-ca18-4ce1-9a34-be58cc0c981b] Running
	I0603 14:50:42.157663    9752 system_pods.go:61] "kube-controller-manager-multinode-720500" [6ba9c1e5-75bb-4731-9105-49acbbf3f237] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 14:50:42.157663    9752 system_pods.go:61] "kube-proxy-64l9x" [ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0603 14:50:42.157663    9752 system_pods.go:61] "kube-proxy-ctm5l" [38069b1b-8ba9-46af-b4e7-7add5d9c67fc] Running
	I0603 14:50:42.157663    9752 system_pods.go:61] "kube-proxy-sm9rr" [4f0321c0-f47d-463e-bda2-919f37735748] Running
	I0603 14:50:42.157663    9752 system_pods.go:61] "kube-scheduler-multinode-720500" [9d420d28-dde0-4504-a4d4-f840cab56ebe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 14:50:42.157663    9752 system_pods.go:61] "storage-provisioner" [8380cfdf-9758-4fd8-a511-db50974806a2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0603 14:50:42.157663    9752 system_pods.go:74] duration metric: took 19.0038ms to wait for pod list to return data ...
	I0603 14:50:42.157663    9752 node_conditions.go:102] verifying NodePressure condition ...
	I0603 14:50:42.158251    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes
	I0603 14:50:42.158251    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.158251    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.158304    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.168418    9752 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 14:50:42.168418    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.168418    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.168418    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.168418    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.168418    9752 round_trippers.go:580]     Audit-Id: 6b446131-60ee-4ac0-982b-a319a74780bc
	I0603 14:50:42.168418    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.168418    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.168418    9752 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1818"},"items":[{"metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16289 chars]
	I0603 14:50:42.170628    9752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:50:42.170710    9752 node_conditions.go:123] node cpu capacity is 2
	I0603 14:50:42.170743    9752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:50:42.170743    9752 node_conditions.go:123] node cpu capacity is 2
	I0603 14:50:42.170743    9752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:50:42.170743    9752 node_conditions.go:123] node cpu capacity is 2
	I0603 14:50:42.170743    9752 node_conditions.go:105] duration metric: took 13.0797ms to run NodePressure ...
	I0603 14:50:42.170794    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 14:50:42.550050    9752 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0603 14:50:42.550804    9752 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0603 14:50:42.550804    9752 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 14:50:42.550921    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0603 14:50:42.550921    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.550921    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.550921    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.572447    9752 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0603 14:50:42.572548    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.572548    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.572548    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.572548    9752 round_trippers.go:580]     Audit-Id: a94334cf-c1d1-4564-a53e-1dce5487adff
	I0603 14:50:42.572611    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.572649    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.572649    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.572785    9752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1824"},"items":[{"metadata":{"name":"etcd-multinode-720500","namespace":"kube-system","uid":"1a2533a2-16e9-4696-9694-186579c52b55","resourceVersion":"1805","creationTimestamp":"2024-06-03T14:50:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.154.20:2379","kubernetes.io/config.hash":"7a9c45e53018cd74c5a13ccfd96f1479","kubernetes.io/config.mirror":"7a9c45e53018cd74c5a13ccfd96f1479","kubernetes.io/config.seen":"2024-06-03T14:50:33.894763922Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:50:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 21600 chars]
	I0603 14:50:42.574536    9752 kubeadm.go:733] kubelet initialised
	I0603 14:50:42.574646    9752 kubeadm.go:734] duration metric: took 23.8059ms waiting for restarted kubelet to initialise ...
	I0603 14:50:42.574646    9752 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 14:50:42.574797    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods
	I0603 14:50:42.574814    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.574850    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.574850    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.586083    9752 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0603 14:50:42.586310    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.586310    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.586310    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.586310    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.586310    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.586444    9752 round_trippers.go:580]     Audit-Id: bea86d3d-08ff-485f-a162-fcaf18e76504
	I0603 14:50:42.586444    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.588124    9752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1824"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 78667 chars]
	I0603 14:50:42.593888    9752 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:42.593888    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:50:42.593888    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.593888    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.593888    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.595656    9752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:50:42.595656    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.595656    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.595656    9752 round_trippers.go:580]     Audit-Id: 3ca27f6b-0589-4bdb-bf10-84150c54e1ec
	I0603 14:50:42.595656    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.595656    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.595656    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.595656    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.596864    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:50:42.597540    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:42.597660    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.597660    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.597660    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.600170    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:50:42.600170    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.600170    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.600170    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.601001    9752 round_trippers.go:580]     Audit-Id: a5ad8d3a-7b10-4b4a-9613-05eb4bc81cd7
	I0603 14:50:42.601001    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.601001    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.601001    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.601327    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:42.601784    9752 pod_ready.go:97] node "multinode-720500" hosting pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.601865    9752 pod_ready.go:81] duration metric: took 7.9771ms for pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:42.601865    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500" hosting pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.601865    9752 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:42.601974    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-720500
	I0603 14:50:42.602049    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.602049    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.602049    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.604314    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:50:42.604314    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.604314    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.604314    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.604314    9752 round_trippers.go:580]     Audit-Id: d94e13bb-e31d-48d0-ab47-53ba905d0d78
	I0603 14:50:42.604314    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.604314    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.604718    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.604932    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-720500","namespace":"kube-system","uid":"1a2533a2-16e9-4696-9694-186579c52b55","resourceVersion":"1805","creationTimestamp":"2024-06-03T14:50:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.154.20:2379","kubernetes.io/config.hash":"7a9c45e53018cd74c5a13ccfd96f1479","kubernetes.io/config.mirror":"7a9c45e53018cd74c5a13ccfd96f1479","kubernetes.io/config.seen":"2024-06-03T14:50:33.894763922Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:50:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0603 14:50:42.605459    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:42.605541    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.605541    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.605541    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.607964    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:50:42.607964    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.607964    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.607964    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.607964    9752 round_trippers.go:580]     Audit-Id: a32951b2-e900-45b0-be5b-bd4000db1513
	I0603 14:50:42.607964    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.607964    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.607964    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.608962    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:42.609589    9752 pod_ready.go:97] node "multinode-720500" hosting pod "etcd-multinode-720500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.609589    9752 pod_ready.go:81] duration metric: took 7.7238ms for pod "etcd-multinode-720500" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:42.609589    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500" hosting pod "etcd-multinode-720500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.609589    9752 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:42.609589    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-720500
	I0603 14:50:42.609589    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.609589    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.609589    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.618388    9752 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 14:50:42.618388    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.618388    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.618388    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.618388    9752 round_trippers.go:580]     Audit-Id: 808aabe5-a24b-413d-bc45-d73038d43a59
	I0603 14:50:42.618388    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.618388    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.618388    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.619159    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-720500","namespace":"kube-system","uid":"6ba9c1e5-75bb-4731-9105-49acbbf3f237","resourceVersion":"1804","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"78d1bd07ad8cdd8611c0b5d7e797ef30","kubernetes.io/config.mirror":"78d1bd07ad8cdd8611c0b5d7e797ef30","kubernetes.io/config.seen":"2024-06-03T14:27:18.382156638Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0603 14:50:42.619409    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:42.619409    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.619409    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.619409    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.626215    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:50:42.626215    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.626215    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.626215    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.626215    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.626215    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.626215    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.626215    9752 round_trippers.go:580]     Audit-Id: 1666dac5-4137-4733-8784-b21b0e7c81fc
	I0603 14:50:42.627001    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:42.627144    9752 pod_ready.go:97] node "multinode-720500" hosting pod "kube-controller-manager-multinode-720500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.627144    9752 pod_ready.go:81] duration metric: took 17.5546ms for pod "kube-controller-manager-multinode-720500" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:42.627144    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500" hosting pod "kube-controller-manager-multinode-720500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.627144    9752 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64l9x" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:42.627144    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-64l9x
	I0603 14:50:42.627144    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.627144    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.627144    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.630018    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:50:42.630018    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.630018    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.630018    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.630018    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.630018    9752 round_trippers.go:580]     Audit-Id: 67c3b156-7901-4bb3-944a-ce49294335f6
	I0603 14:50:42.630539    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.630539    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.631184    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-64l9x","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a","resourceVersion":"1822","creationTimestamp":"2024-06-03T14:27:32Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0603 14:50:42.631756    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:42.631756    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.631756    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.631756    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.650970    9752 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0603 14:50:42.651331    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.651331    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.651331    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.651331    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.651331    9752 round_trippers.go:580]     Audit-Id: 208ca559-880b-4c23-8d04-e71bf1f3f323
	I0603 14:50:42.651331    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.651413    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.651493    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:42.652095    9752 pod_ready.go:97] node "multinode-720500" hosting pod "kube-proxy-64l9x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.652156    9752 pod_ready.go:81] duration metric: took 25.0122ms for pod "kube-proxy-64l9x" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:42.652156    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500" hosting pod "kube-proxy-64l9x" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:42.652156    9752 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ctm5l" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:42.761670    9752 request.go:629] Waited for 109.5131ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctm5l
	I0603 14:50:42.762030    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctm5l
	I0603 14:50:42.762117    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.762117    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.762117    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.766688    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:42.766899    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.767010    9752 round_trippers.go:580]     Audit-Id: 87f222ea-bd14-44a6-b1de-7fe3972342f5
	I0603 14:50:42.767010    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.767010    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.767010    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.767010    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.767010    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.767303    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ctm5l","generateName":"kube-proxy-","namespace":"kube-system","uid":"38069b1b-8ba9-46af-b4e7-7add5d9c67fc","resourceVersion":"1761","creationTimestamp":"2024-06-03T14:35:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0603 14:50:42.964358    9752 request.go:629] Waited for 196.0468ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m03
	I0603 14:50:42.964358    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m03
	I0603 14:50:42.964358    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:42.964358    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:42.964358    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:42.969724    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:50:42.969724    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:42.969724    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:42.969724    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:42 GMT
	I0603 14:50:42.969724    9752 round_trippers.go:580]     Audit-Id: 6716eae3-c43e-4b96-a6ac-6b25a3d3c482
	I0603 14:50:42.969724    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:42.969724    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:42.969724    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:42.972028    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m03","uid":"daf03ea9-c0d0-4565-9ad8-44cd4fce8e19","resourceVersion":"1770","creationTimestamp":"2024-06-03T14:46:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_46_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:46:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0603 14:50:42.972210    9752 pod_ready.go:97] node "multinode-720500-m03" hosting pod "kube-proxy-ctm5l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m03" has status "Ready":"Unknown"
	I0603 14:50:42.972210    9752 pod_ready.go:81] duration metric: took 320.0513ms for pod "kube-proxy-ctm5l" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:42.972210    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500-m03" hosting pod "kube-proxy-ctm5l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m03" has status "Ready":"Unknown"
	I0603 14:50:42.972210    9752 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sm9rr" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:43.151706    9752 request.go:629] Waited for 178.7035ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sm9rr
	I0603 14:50:43.152034    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sm9rr
	I0603 14:50:43.152034    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:43.152034    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:43.152034    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:43.159849    9752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 14:50:43.159849    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:43.159849    9752 round_trippers.go:580]     Audit-Id: 7fe22f0d-acfb-4e87-aa89-658d771551f9
	I0603 14:50:43.159849    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:43.159849    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:43.159849    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:43.159849    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:43.159849    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:43 GMT
	I0603 14:50:43.159849    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sm9rr","generateName":"kube-proxy-","namespace":"kube-system","uid":"4f0321c0-f47d-463e-bda2-919f37735748","resourceVersion":"1786","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0603 14:50:43.353269    9752 request.go:629] Waited for 192.6144ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:50:43.353531    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:50:43.353531    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:43.353609    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:43.353609    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:43.360310    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:50:43.360310    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:43.360310    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:43.360310    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:43.360310    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:43.360310    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:43 GMT
	I0603 14:50:43.360310    9752 round_trippers.go:580]     Audit-Id: 664327f0-76ca-48b2-9002-d728662e98e4
	I0603 14:50:43.360310    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:43.360310    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"1785","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4486 chars]
	I0603 14:50:43.361096    9752 pod_ready.go:97] node "multinode-720500-m02" hosting pod "kube-proxy-sm9rr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m02" has status "Ready":"Unknown"
	I0603 14:50:43.361096    9752 pod_ready.go:81] duration metric: took 388.8828ms for pod "kube-proxy-sm9rr" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:43.361096    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500-m02" hosting pod "kube-proxy-sm9rr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m02" has status "Ready":"Unknown"
	I0603 14:50:43.361096    9752 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:50:43.555552    9752 request.go:629] Waited for 194.4545ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-720500
	I0603 14:50:43.555908    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-720500
	I0603 14:50:43.556042    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:43.556042    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:43.556042    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:43.559377    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:43.559655    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:43.559655    9752 round_trippers.go:580]     Audit-Id: 43d57b5b-de71-46e9-9856-5ce7d54e6b4a
	I0603 14:50:43.559655    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:43.559655    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:43.559655    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:43.559772    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:43.559772    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:43 GMT
	I0603 14:50:43.559911    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-720500","namespace":"kube-system","uid":"9d420d28-dde0-4504-a4d4-f840cab56ebe","resourceVersion":"1802","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f58e384885de6f2352fb028e836ba47f","kubernetes.io/config.mirror":"f58e384885de6f2352fb028e836ba47f","kubernetes.io/config.seen":"2024-06-03T14:27:18.382157538Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0603 14:50:43.758561    9752 request.go:629] Waited for 197.4939ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:43.758650    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:43.758650    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:43.758880    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:43.758880    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:43.762595    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:43.762595    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:43.762595    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:43.762595    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:43 GMT
	I0603 14:50:43.762802    9752 round_trippers.go:580]     Audit-Id: 1049f630-b549-4500-960c-545477b71ae6
	I0603 14:50:43.762802    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:43.762802    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:43.762802    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:43.763290    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:43.763859    9752 pod_ready.go:97] node "multinode-720500" hosting pod "kube-scheduler-multinode-720500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:43.763859    9752 pod_ready.go:81] duration metric: took 402.7594ms for pod "kube-scheduler-multinode-720500" in "kube-system" namespace to be "Ready" ...
	E0603 14:50:43.763929    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500" hosting pod "kube-scheduler-multinode-720500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500" has status "Ready":"False"
	I0603 14:50:43.763929    9752 pod_ready.go:38] duration metric: took 1.1892736s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 14:50:43.763929    9752 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 14:50:43.783079    9752 command_runner.go:130] > -16
	I0603 14:50:43.783079    9752 ops.go:34] apiserver oom_adj: -16
	I0603 14:50:43.783079    9752 kubeadm.go:591] duration metric: took 12.3793736s to restartPrimaryControlPlane
	I0603 14:50:43.783079    9752 kubeadm.go:393] duration metric: took 12.4468804s to StartCluster
	I0603 14:50:43.783079    9752 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:50:43.783634    9752 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 14:50:43.786229    9752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:50:43.788934    9752 start.go:234] Will wait 6m0s for node &{Name: IP:172.22.154.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0603 14:50:43.788934    9752 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 14:50:43.793634    9752 out.go:177] * Verifying Kubernetes components...
	I0603 14:50:43.788934    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:50:43.798082    9752 out.go:177] * Enabled addons: 
	I0603 14:50:43.801075    9752 addons.go:510] duration metric: took 12.1411ms for enable addons: enabled=[]
	I0603 14:50:43.808206    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:50:44.080025    9752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 14:50:44.114641    9752 node_ready.go:35] waiting up to 6m0s for node "multinode-720500" to be "Ready" ...
	I0603 14:50:44.114641    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:44.114641    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:44.114641    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:44.114641    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:44.118171    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:44.118171    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:44.118171    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:44.118171    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:44.119147    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:44.119147    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:44 GMT
	I0603 14:50:44.119147    9752 round_trippers.go:580]     Audit-Id: e111bc66-e96c-4449-9dfc-b7a08b199cd6
	I0603 14:50:44.119147    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:44.119355    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:44.619879    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:44.619879    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:44.619879    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:44.619994    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:44.624505    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:44.624549    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:44.624549    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:44.624549    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:44.624549    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:44.624549    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:44.624549    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:44 GMT
	I0603 14:50:44.624549    9752 round_trippers.go:580]     Audit-Id: 11c8216b-bef0-4230-9940-6ce810c6b064
	I0603 14:50:44.624630    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:45.117020    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:45.117020    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:45.117020    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:45.117020    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:45.120599    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:45.120599    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:45.121518    9752 round_trippers.go:580]     Audit-Id: 8a723eff-9ee1-401c-b716-68f704c82417
	I0603 14:50:45.121518    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:45.121518    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:45.121518    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:45.121518    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:45.121518    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:45 GMT
	I0603 14:50:45.121749    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:45.621560    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:45.621560    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:45.621560    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:45.621560    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:45.625483    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:45.625483    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:45.625483    9752 round_trippers.go:580]     Audit-Id: f80c4814-5055-481a-89cc-1799a3aff349
	I0603 14:50:45.625483    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:45.625483    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:45.625483    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:45.625483    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:45.625483    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:45 GMT
	I0603 14:50:45.625483    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:46.127588    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:46.127588    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:46.127588    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:46.127588    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:46.141209    9752 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0603 14:50:46.141209    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:46.141209    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:46.141420    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:46.141420    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:46 GMT
	I0603 14:50:46.141420    9752 round_trippers.go:580]     Audit-Id: bb144405-5e94-401b-bd71-2656fb8db0c9
	I0603 14:50:46.141420    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:46.141420    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:46.144803    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:46.145315    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:50:46.631293    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:46.631293    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:46.631293    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:46.631293    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:46.634017    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:50:46.634017    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:46.634017    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:46 GMT
	I0603 14:50:46.634017    9752 round_trippers.go:580]     Audit-Id: 0a2480ec-37d6-4f5c-8779-be70230aa0c3
	I0603 14:50:46.635076    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:46.635076    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:46.635076    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:46.635076    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:46.635375    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:47.127871    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:47.127871    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:47.128100    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:47.128100    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:47.131894    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:47.131894    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:47.131894    9752 round_trippers.go:580]     Audit-Id: fbe126e8-e878-426f-8527-30f8df41f7eb
	I0603 14:50:47.131894    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:47.131894    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:47.132878    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:47.132878    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:47.132878    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:47 GMT
	I0603 14:50:47.133318    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:47.615681    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:47.615763    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:47.615828    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:47.615828    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:47.619702    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:47.620263    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:47.620263    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:47.620263    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:47 GMT
	I0603 14:50:47.620263    9752 round_trippers.go:580]     Audit-Id: 9804b620-ec73-42fb-a04d-a99c32ddb9ba
	I0603 14:50:47.620263    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:47.620263    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:47.620263    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:47.620966    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:48.115800    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:48.115800    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:48.115800    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:48.115800    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:48.121383    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:50:48.121467    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:48.121494    9752 round_trippers.go:580]     Audit-Id: a4330963-e56f-4667-8df0-8ee19cd77160
	I0603 14:50:48.121494    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:48.121494    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:48.121494    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:48.121545    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:48.121545    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:48 GMT
	I0603 14:50:48.122858    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:48.616329    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:48.616329    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:48.616329    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:48.616329    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:48.620502    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:48.620502    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:48.620593    9752 round_trippers.go:580]     Audit-Id: f242052b-0d44-4c84-b52e-649abd5ee96b
	I0603 14:50:48.620593    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:48.620593    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:48.620593    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:48.620593    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:48.620593    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:48 GMT
	I0603 14:50:48.621005    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:48.622096    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:50:49.116216    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:49.116216    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:49.116216    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:49.116216    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:49.119884    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:49.120656    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:49.120656    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:49.120656    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:49 GMT
	I0603 14:50:49.120656    9752 round_trippers.go:580]     Audit-Id: b6a39755-e0d5-4d27-af05-f962c54952b3
	I0603 14:50:49.120656    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:49.120656    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:49.120656    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:49.120656    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:49.616840    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:49.616840    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:49.617053    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:49.617053    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:49.623173    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:50:49.623173    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:49.623173    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:49.623173    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:49.623173    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:49.623173    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:49 GMT
	I0603 14:50:49.623173    9752 round_trippers.go:580]     Audit-Id: 4ef321e7-4d0a-4a59-bbf5-7425d6368be2
	I0603 14:50:49.623694    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:49.623894    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:50.117132    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:50.117386    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:50.117443    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:50.117443    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:50.121727    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:50.121793    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:50.121793    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:50.121793    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:50.121793    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:50.121793    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:50 GMT
	I0603 14:50:50.121793    9752 round_trippers.go:580]     Audit-Id: a065e32c-9413-4390-b230-45d724bd4c7a
	I0603 14:50:50.121793    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:50.121793    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:50.621134    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:50.621251    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:50.621316    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:50.621316    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:50.624993    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:50.625162    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:50.625162    9752 round_trippers.go:580]     Audit-Id: 7dff31d5-33f2-43b0-b384-136459e283f8
	I0603 14:50:50.625162    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:50.625162    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:50.625162    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:50.625162    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:50.625162    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:50 GMT
	I0603 14:50:50.625845    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:50.626362    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:50:51.123803    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:51.123954    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:51.123954    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:51.123954    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:51.128574    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:51.128574    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:51.128574    9752 round_trippers.go:580]     Audit-Id: 63b0a5a8-2ee5-4fb9-9d1e-e164bf1ceab1
	I0603 14:50:51.128574    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:51.128574    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:51.128574    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:51.128574    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:51.128574    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:51 GMT
	I0603 14:50:51.128574    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:51.625484    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:51.625569    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:51.625569    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:51.625569    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:51.628684    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:51.628684    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:51.628684    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:51 GMT
	I0603 14:50:51.628684    9752 round_trippers.go:580]     Audit-Id: 90c4c16c-1a15-49d6-ad6b-1caa95268a73
	I0603 14:50:51.628684    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:51.628684    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:51.628684    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:51.628684    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:51.630450    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:52.125931    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:52.125931    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:52.125931    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:52.125931    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:52.129550    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:52.129550    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:52.129550    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:52 GMT
	I0603 14:50:52.129550    9752 round_trippers.go:580]     Audit-Id: 142575e2-f9f6-4d54-b29a-e0f2c2257dbf
	I0603 14:50:52.129550    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:52.129550    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:52.129550    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:52.129550    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:52.129550    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1799","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0603 14:50:52.618507    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:52.618507    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:52.618507    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:52.618507    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:52.624114    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:50:52.624114    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:52.624114    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:52.624114    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:52.624114    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:52 GMT
	I0603 14:50:52.624416    9752 round_trippers.go:580]     Audit-Id: cfb88214-40a5-42b2-b64d-da77a76991bb
	I0603 14:50:52.624416    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:52.624416    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:52.625101    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:53.120865    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:53.120865    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:53.120865    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:53.120865    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:53.125578    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:53.125578    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:53.125578    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:53 GMT
	I0603 14:50:53.125843    9752 round_trippers.go:580]     Audit-Id: ed9f4b92-b597-427c-94c0-845d19732cb8
	I0603 14:50:53.125843    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:53.125843    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:53.125843    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:53.125843    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:53.126065    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:53.126189    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:50:53.618677    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:53.618677    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:53.618677    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:53.618677    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:53.622288    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:53.623102    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:53.623102    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:53.623102    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:53.623102    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:53 GMT
	I0603 14:50:53.623102    9752 round_trippers.go:580]     Audit-Id: 0a159cc5-3307-4f3e-bbed-7afb5f785f1e
	I0603 14:50:53.623102    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:53.623102    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:53.624369    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:54.118646    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:54.118646    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:54.118646    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:54.118646    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:54.122213    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:54.122213    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:54.122213    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:54.122843    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:54.122843    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:54.122843    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:54.122843    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:54 GMT
	I0603 14:50:54.122843    9752 round_trippers.go:580]     Audit-Id: dfeec84e-0cfc-4606-b59c-19a0da83fa44
	I0603 14:50:54.122843    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:54.625959    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:54.626289    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:54.626289    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:54.626289    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:54.631578    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:50:54.632619    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:54.632619    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:54.632619    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:54 GMT
	I0603 14:50:54.632619    9752 round_trippers.go:580]     Audit-Id: 63234624-7d5b-4158-9c65-ba2a01220a7f
	I0603 14:50:54.632619    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:54.632709    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:54.632709    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:54.633044    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:55.125344    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:55.125344    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:55.125344    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:55.125344    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:55.129614    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:55.129614    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:55.129614    9752 round_trippers.go:580]     Audit-Id: 04711d7c-7579-4b8c-81e5-9337dadb9007
	I0603 14:50:55.129614    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:55.129614    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:55.129614    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:55.129614    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:55.129614    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:55 GMT
	I0603 14:50:55.130187    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:55.130913    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:50:55.624434    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:55.624564    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:55.624564    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:55.624564    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:55.632792    9752 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 14:50:55.632792    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:55.632792    9752 round_trippers.go:580]     Audit-Id: f7e03f1d-a29e-4f34-aac7-e6d5b46d1676
	I0603 14:50:55.632792    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:55.632792    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:55.632792    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:55.632792    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:55.632792    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:55 GMT
	I0603 14:50:55.632792    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:56.126700    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:56.126700    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:56.126700    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:56.126785    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:56.131521    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:56.131584    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:56.131584    9752 round_trippers.go:580]     Audit-Id: 679db11a-a050-4318-a55a-218dfb801e32
	I0603 14:50:56.131584    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:56.131584    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:56.131584    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:56.131584    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:56.131584    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:56 GMT
	I0603 14:50:56.132443    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:56.623012    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:56.623012    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:56.623012    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:56.623012    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:56.627893    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:56.627893    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:56.627893    9752 round_trippers.go:580]     Audit-Id: 21231a93-82fa-4d46-bd84-5cea81fbcdb9
	I0603 14:50:56.627893    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:56.627893    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:56.627893    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:56.627893    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:56.627893    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:56 GMT
	I0603 14:50:56.627893    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:57.120771    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:57.120890    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:57.120890    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:57.120890    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:57.125709    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:50:57.126409    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:57.126409    9752 round_trippers.go:580]     Audit-Id: 21f3fae7-37ff-41ec-92bb-1ad85b073205
	I0603 14:50:57.126409    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:57.126409    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:57.126409    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:57.126409    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:57.126409    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:57 GMT
	I0603 14:50:57.126538    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:57.620506    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:57.620506    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:57.620625    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:57.620625    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:57.625926    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:50:57.625926    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:57.626035    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:57.626035    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:57.626035    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:57.626099    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:57 GMT
	I0603 14:50:57.626099    9752 round_trippers.go:580]     Audit-Id: 5f7aeb7a-d5a1-4885-b93f-024c0895f285
	I0603 14:50:57.626099    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:57.626428    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:57.626633    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:50:58.121639    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:58.121639    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:58.121639    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:58.121639    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:58.125234    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:58.125234    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:58.125234    9752 round_trippers.go:580]     Audit-Id: 1e3d377b-5ea2-4ad8-af09-76102f22e181
	I0603 14:50:58.125234    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:58.125495    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:58.125495    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:58.125495    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:58.125495    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:58 GMT
	I0603 14:50:58.126430    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:58.618210    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:58.618474    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:58.618474    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:58.618474    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:58.621874    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:58.621874    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:58.621874    9752 round_trippers.go:580]     Audit-Id: 2459caa2-56c5-4a30-bf1b-b87d0287d38f
	I0603 14:50:58.621874    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:58.621874    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:58.621874    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:58.621874    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:58.621874    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:58 GMT
	I0603 14:50:58.622734    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:59.131147    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:59.131147    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:59.131147    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:59.131147    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:59.135357    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:59.135379    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:59.135379    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:59.135379    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:59 GMT
	I0603 14:50:59.135379    9752 round_trippers.go:580]     Audit-Id: 08a6700a-ec14-4dd8-b1b6-b901da8e9da6
	I0603 14:50:59.135379    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:59.135472    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:59.135472    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:59.135645    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:59.625621    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:50:59.625704    9752 round_trippers.go:469] Request Headers:
	I0603 14:50:59.625704    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:50:59.625704    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:50:59.629532    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:50:59.629532    9752 round_trippers.go:577] Response Headers:
	I0603 14:50:59.629925    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:50:59.629925    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:50:59 GMT
	I0603 14:50:59.629925    9752 round_trippers.go:580]     Audit-Id: 2b16e639-54d8-4963-9632-80e4cd30565b
	I0603 14:50:59.629925    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:50:59.629925    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:50:59.629925    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:50:59.629925    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:50:59.630762    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:00.118291    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:00.118533    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:00.118533    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:00.118533    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:00.122411    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:00.122411    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:00.122411    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:00.122411    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:00.122480    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:00 GMT
	I0603 14:51:00.122480    9752 round_trippers.go:580]     Audit-Id: e0d2ab5b-52b6-4f3f-ad61-ba5cb51f81aa
	I0603 14:51:00.122480    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:00.122480    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:00.122657    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:00.627785    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:00.627785    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:00.627785    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:00.628040    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:00.631259    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:00.632191    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:00.632191    9752 round_trippers.go:580]     Audit-Id: 16c4791e-2245-45f2-90d0-86de4b8c6f5a
	I0603 14:51:00.632191    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:00.632191    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:00.632191    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:00.632257    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:00.632257    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:00 GMT
	I0603 14:51:00.632352    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:01.120137    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:01.120137    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:01.120137    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:01.120137    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:01.127104    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:01.127104    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:01.127104    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:01.127104    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:01 GMT
	I0603 14:51:01.127104    9752 round_trippers.go:580]     Audit-Id: d20e5242-88bb-48d2-afdd-bf50550a0b8b
	I0603 14:51:01.127104    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:01.127104    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:01.127104    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:01.127104    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:01.627946    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:01.628303    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:01.628303    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:01.628303    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:01.631639    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:01.631929    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:01.631929    9752 round_trippers.go:580]     Audit-Id: 425a9b5b-ff68-4146-9f10-fe76a714a9be
	I0603 14:51:01.631929    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:01.631929    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:01.631929    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:01.631929    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:01.631929    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:01 GMT
	I0603 14:51:01.632362    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:01.632853    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:02.122331    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:02.122331    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:02.122331    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:02.122629    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:02.127498    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:02.127498    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:02.127498    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:02.127498    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:02 GMT
	I0603 14:51:02.127705    9752 round_trippers.go:580]     Audit-Id: 913a1be9-23fe-417a-bab1-1acb0afdfd10
	I0603 14:51:02.127705    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:02.127705    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:02.127705    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:02.127972    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:02.625414    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:02.625644    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:02.625644    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:02.625644    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:02.632515    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:02.632515    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:02.632515    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:02.632515    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:02 GMT
	I0603 14:51:02.632515    9752 round_trippers.go:580]     Audit-Id: 179f32dc-1fa8-4abb-b807-a9c2272e6df6
	I0603 14:51:02.632515    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:02.632515    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:02.632515    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:02.633248    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:03.125320    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:03.125320    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:03.125320    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:03.125320    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:03.131768    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:03.131861    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:03.131874    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:03.131874    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:03 GMT
	I0603 14:51:03.131874    9752 round_trippers.go:580]     Audit-Id: 0749c042-ac75-4284-a6e3-dbe12850d383
	I0603 14:51:03.131874    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:03.131874    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:03.131874    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:03.133117    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:03.627026    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:03.627349    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:03.627349    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:03.627349    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:03.631893    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:03.631893    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:03.631893    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:03.631893    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:03.631893    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:03.631893    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:03 GMT
	I0603 14:51:03.631893    9752 round_trippers.go:580]     Audit-Id: e8f02c8c-7148-418a-8fe2-68db6af2fd17
	I0603 14:51:03.631893    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:03.631893    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:04.127071    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:04.127071    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:04.127071    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:04.127071    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:04.131594    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:04.131594    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:04.131594    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:04.131594    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:04 GMT
	I0603 14:51:04.131594    9752 round_trippers.go:580]     Audit-Id: 5f68bd15-8cc9-418b-8c8f-d5164128b955
	I0603 14:51:04.131594    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:04.132579    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:04.132579    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:04.132969    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:04.133542    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:04.616515    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:04.616515    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:04.616515    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:04.616515    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:04.620830    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:04.620830    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:04.620830    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:04.620830    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:04.620830    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:04.620830    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:04 GMT
	I0603 14:51:04.620830    9752 round_trippers.go:580]     Audit-Id: 207aa0ba-2430-42aa-9735-fdece6cc9c76
	I0603 14:51:04.620830    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:04.620830    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:05.116583    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:05.116583    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:05.116583    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:05.116583    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:05.120184    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:05.120184    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:05.120184    9752 round_trippers.go:580]     Audit-Id: cf44b14b-b080-42d5-b843-0853df5f75d0
	I0603 14:51:05.120184    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:05.120184    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:05.121219    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:05.121219    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:05.121272    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:05 GMT
	I0603 14:51:05.121525    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:05.618317    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:05.618317    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:05.618317    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:05.618317    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:05.622812    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:05.623357    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:05.623357    9752 round_trippers.go:580]     Audit-Id: 9e7b0424-b565-4d90-ac2a-e1655bac4f84
	I0603 14:51:05.623357    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:05.623357    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:05.623428    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:05.623428    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:05.623428    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:05 GMT
	I0603 14:51:05.623686    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:06.118466    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:06.118584    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:06.118639    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:06.118639    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:06.122208    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:06.122208    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:06.122208    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:06.122208    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:06.122399    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:06.122399    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:06.122399    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:06 GMT
	I0603 14:51:06.122399    9752 round_trippers.go:580]     Audit-Id: 8def6d80-1301-4d24-a58e-316862413164
	I0603 14:51:06.122594    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:06.620960    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:06.620960    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:06.620960    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:06.620960    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:06.624717    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:06.624717    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:06.625441    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:06.625441    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:06.625441    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:06.625441    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:06 GMT
	I0603 14:51:06.625441    9752 round_trippers.go:580]     Audit-Id: ff9bdd93-67bb-4e6d-860e-dcb39944ecaf
	I0603 14:51:06.625441    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:06.625781    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:06.626350    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:07.122480    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:07.122480    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:07.122480    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:07.122698    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:07.128714    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:07.128714    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:07.128714    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:07.128714    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:07.128714    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:07.128714    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:07.128714    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:07 GMT
	I0603 14:51:07.128714    9752 round_trippers.go:580]     Audit-Id: c2ca4302-b910-41e4-a837-60ae14349d6f
	I0603 14:51:07.129502    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:07.623294    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:07.623294    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:07.623294    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:07.623294    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:07.629960    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:07.629960    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:07.629960    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:07.629960    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:07 GMT
	I0603 14:51:07.629960    9752 round_trippers.go:580]     Audit-Id: 22181386-77ab-495e-b4d7-03be4ed61ebb
	I0603 14:51:07.629960    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:07.629960    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:07.629960    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:07.630612    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:08.124269    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:08.124269    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:08.124269    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:08.124269    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:08.129297    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:08.129297    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:08.129297    9752 round_trippers.go:580]     Audit-Id: 20eddc7c-d9ce-4fc1-babe-e5d5f39046bb
	I0603 14:51:08.129297    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:08.129297    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:08.129297    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:08.129297    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:08.129297    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:08 GMT
	I0603 14:51:08.130039    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:08.622777    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:08.622837    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:08.622909    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:08.622909    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:08.626844    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:08.626844    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:08.626844    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:08.626844    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:08.626844    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:08.626844    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:08.626844    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:08 GMT
	I0603 14:51:08.626844    9752 round_trippers.go:580]     Audit-Id: 8b8028ce-1346-4566-ad33-ab4ba9627375
	I0603 14:51:08.626844    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:08.627690    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:09.124919    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:09.125046    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:09.125046    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:09.125159    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:09.128418    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:09.129094    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:09.129094    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:09 GMT
	I0603 14:51:09.129094    9752 round_trippers.go:580]     Audit-Id: d15f1ed7-a3b5-4516-9a26-a51d7092044b
	I0603 14:51:09.129094    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:09.129094    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:09.129094    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:09.129094    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:09.129294    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:09.621871    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:09.621958    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:09.621958    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:09.621958    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:09.626656    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:09.626760    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:09.626760    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:09.626760    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:09.626760    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:09 GMT
	I0603 14:51:09.626760    9752 round_trippers.go:580]     Audit-Id: bd2c293c-6b08-4375-9505-36b8c8461e69
	I0603 14:51:09.626760    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:09.626760    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:09.627238    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:10.124522    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:10.124522    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:10.124629    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:10.124629    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:10.127953    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:10.127953    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:10.127953    9752 round_trippers.go:580]     Audit-Id: 20366f6b-3218-4e16-8331-3b20ae03a1e7
	I0603 14:51:10.127953    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:10.127953    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:10.127953    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:10.127953    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:10.127953    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:10 GMT
	I0603 14:51:10.129290    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:10.623257    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:10.623466    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:10.623466    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:10.623466    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:10.626878    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:10.626878    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:10.626878    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:10 GMT
	I0603 14:51:10.626878    9752 round_trippers.go:580]     Audit-Id: a7ac0590-865d-4fbf-a917-f6cfc9449896
	I0603 14:51:10.626878    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:10.626878    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:10.626878    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:10.626878    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:10.628175    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:10.629144    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:11.127780    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:11.127780    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:11.128047    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:11.128047    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:11.134519    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:11.134519    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:11.134519    9752 round_trippers.go:580]     Audit-Id: c351e713-70c0-4e44-b397-34c65689d556
	I0603 14:51:11.134519    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:11.134519    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:11.134519    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:11.134519    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:11.134519    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:11 GMT
	I0603 14:51:11.134519    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:11.628606    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:11.628779    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:11.628779    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:11.628779    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:11.632505    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:11.632505    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:11.632505    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:11.632505    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:11.632505    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:11.633256    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:11 GMT
	I0603 14:51:11.633256    9752 round_trippers.go:580]     Audit-Id: 56d235dc-a551-4b51-8981-5fdc185c4d29
	I0603 14:51:11.633256    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:11.633599    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:12.117709    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:12.117709    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:12.117709    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:12.117709    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:12.121306    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:12.121306    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:12.122187    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:12.122187    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:12 GMT
	I0603 14:51:12.122187    9752 round_trippers.go:580]     Audit-Id: 2b22cafe-f798-45ac-81f8-e0eecced20fb
	I0603 14:51:12.122187    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:12.122187    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:12.122187    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:12.123012    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:12.618185    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:12.618185    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:12.618185    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:12.618185    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:12.621852    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:12.621897    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:12.621897    9752 round_trippers.go:580]     Audit-Id: 3d136173-3187-47ca-8d9f-c3fb31ed2c4b
	I0603 14:51:12.621897    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:12.621897    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:12.621897    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:12.621982    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:12.621982    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:12 GMT
	I0603 14:51:12.622300    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:13.120084    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:13.120174    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:13.120174    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:13.120174    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:13.124031    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:13.124031    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:13.124031    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:13.124031    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:13.124031    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:13 GMT
	I0603 14:51:13.124031    9752 round_trippers.go:580]     Audit-Id: 80164430-8d7c-4c78-84e5-03c5126a06f4
	I0603 14:51:13.124031    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:13.124286    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:13.124400    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:13.125168    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:13.615525    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:13.615619    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:13.615619    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:13.615619    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:13.622213    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:13.622213    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:13.622213    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:13.622213    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:13.622213    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:13.622213    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:13 GMT
	I0603 14:51:13.622213    9752 round_trippers.go:580]     Audit-Id: 2ac43b9b-eb13-42e1-b4e9-ec54d85982f7
	I0603 14:51:13.622371    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:13.622824    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:14.116221    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:14.116534    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:14.116617    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:14.116617    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:14.120367    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:14.120648    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:14.120648    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:14.120648    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:14 GMT
	I0603 14:51:14.120648    9752 round_trippers.go:580]     Audit-Id: c6c4e8dd-3a89-4ea4-8e9e-4b65518e7619
	I0603 14:51:14.120648    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:14.120730    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:14.120730    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:14.121154    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:14.617882    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:14.617962    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:14.617962    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:14.617962    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:14.621411    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:14.621411    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:14.621411    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:14.622332    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:14 GMT
	I0603 14:51:14.622388    9752 round_trippers.go:580]     Audit-Id: 83922066-a321-4773-a639-2c49d96f76bd
	I0603 14:51:14.622431    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:14.622431    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:14.622431    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:14.622487    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:15.117969    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:15.118258    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:15.118258    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:15.118258    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:15.122117    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:15.122117    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:15.122117    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:15.122117    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:15.122117    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:15.122117    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:15.122117    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:15 GMT
	I0603 14:51:15.122117    9752 round_trippers.go:580]     Audit-Id: da9b409a-372f-4bb7-a800-84762be38f6c
	I0603 14:51:15.122117    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:15.630457    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:15.630457    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:15.630457    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:15.630457    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:15.635439    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:15.635439    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:15.635439    9752 round_trippers.go:580]     Audit-Id: 615a36cc-a4aa-4304-82fe-5097bdc9324c
	I0603 14:51:15.635439    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:15.635439    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:15.635439    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:15.635439    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:15.635439    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:15 GMT
	I0603 14:51:15.635439    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:15.636443    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:16.130187    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:16.130187    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:16.130187    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:16.130187    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:16.132839    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:16.132839    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:16.132839    9752 round_trippers.go:580]     Audit-Id: aed891a2-f0fb-44f6-a41a-352e6bd51eac
	I0603 14:51:16.132839    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:16.133854    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:16.133854    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:16.133854    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:16.133854    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:16 GMT
	I0603 14:51:16.133957    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:16.630469    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:16.630469    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:16.630469    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:16.630469    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:16.635959    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:16.636021    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:16.636021    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:16.636021    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:16.636021    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:16.636021    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:16 GMT
	I0603 14:51:16.636021    9752 round_trippers.go:580]     Audit-Id: d8cd06b8-729e-4056-be4e-1ac6a386ac0d
	I0603 14:51:16.636021    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:16.636546    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:17.117038    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:17.117384    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:17.117384    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:17.117473    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:17.121811    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:17.122443    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:17.122443    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:17.122443    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:17 GMT
	I0603 14:51:17.122443    9752 round_trippers.go:580]     Audit-Id: f851d33e-f5db-499d-b509-383d0c6bf0d3
	I0603 14:51:17.122443    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:17.122443    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:17.122443    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:17.122678    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:17.621881    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:17.621881    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:17.621881    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:17.621881    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:17.625713    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:17.625713    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:17.626358    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:17 GMT
	I0603 14:51:17.626358    9752 round_trippers.go:580]     Audit-Id: 7789b2c2-e784-4503-96c5-fe36a1ffcd2c
	I0603 14:51:17.626358    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:17.626358    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:17.626358    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:17.626358    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:17.626889    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:18.120091    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:18.120190    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:18.120190    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:18.120190    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:18.123982    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:18.124386    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:18.124386    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:18.124386    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:18.124386    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:18 GMT
	I0603 14:51:18.124386    9752 round_trippers.go:580]     Audit-Id: d699f7b7-b4c5-4a22-a87b-f56f83980769
	I0603 14:51:18.124386    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:18.124386    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:18.124631    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:18.125141    9752 node_ready.go:53] node "multinode-720500" has status "Ready":"False"
	I0603 14:51:18.619372    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:18.619481    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:18.619481    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:18.619481    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:18.623928    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:18.623928    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:18.623928    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:18.623928    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:18 GMT
	I0603 14:51:18.623928    9752 round_trippers.go:580]     Audit-Id: 70dd0080-c668-496f-beda-cb99e58719ec
	I0603 14:51:18.624070    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:18.624070    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:18.624070    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:18.624937    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:19.116510    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:19.116876    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:19.116876    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:19.116876    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:19.123353    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:19.123353    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:19.123353    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:19.123353    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:19 GMT
	I0603 14:51:19.123353    9752 round_trippers.go:580]     Audit-Id: 8be1ffa6-5e4c-4225-bda8-2a6480368274
	I0603 14:51:19.123353    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:19.123353    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:19.123353    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:19.123353    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:19.617543    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:19.617543    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:19.617800    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:19.617800    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:19.621225    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:19.621225    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:19.621225    9752 round_trippers.go:580]     Audit-Id: 7684cd2f-3dcb-466d-b846-21a972f24581
	I0603 14:51:19.621225    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:19.621225    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:19.621225    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:19.621225    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:19.621225    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:19 GMT
	I0603 14:51:19.622347    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:20.119385    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:20.119385    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:20.119385    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:20.119385    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:20.123664    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:20.123664    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:20.123664    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:20.123751    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:20.123751    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:20.123751    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:20 GMT
	I0603 14:51:20.123751    9752 round_trippers.go:580]     Audit-Id: 652b7f77-80fe-41cf-b711-6625ca26244c
	I0603 14:51:20.123751    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:20.124019    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1909","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I0603 14:51:20.619112    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:20.619112    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:20.619112    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:20.619112    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:20.622734    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:20.623166    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:20.623166    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:20.623166    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:20.623166    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:20.623166    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:20 GMT
	I0603 14:51:20.623239    9752 round_trippers.go:580]     Audit-Id: 4fecd7ea-3a9d-4e3d-af7d-eddcb53d8ddd
	I0603 14:51:20.623239    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:20.623239    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:20.624216    9752 node_ready.go:49] node "multinode-720500" has status "Ready":"True"
	I0603 14:51:20.624287    9752 node_ready.go:38] duration metric: took 36.5093044s for node "multinode-720500" to be "Ready" ...
	I0603 14:51:20.624314    9752 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
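The loop logged above is a node-readiness poll: roughly every 500 ms the client GETs /api/v1/nodes/multinode-720500 and inspects the node's Ready condition, stopping once it reports True (here at 14:51:20, after ~36.5 s). A minimal sketch of that pattern with client-go follows; the function name waitForNodeReady, the 500 ms interval, and the package name are illustrative assumptions and not minikube's actual node_ready.go implementation.

// Sketch of the readiness poll seen in the log above: GET the node on a
// fixed interval and stop once the NodeReady condition is True.
// Assumes a configured *kubernetes.Clientset; names and timings are
// illustrative, not minikube's exact code.
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil
			}
		}
		// Mirrors the repeated `has status "Ready":"False"` lines in the log.
		fmt.Printf("node %q not Ready yet, retrying\n", name)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}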
	I0603 14:51:20.624410    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods
	I0603 14:51:20.624495    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:20.624495    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:20.624495    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:20.632842    9752 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 14:51:20.632842    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:20.632842    9752 round_trippers.go:580]     Audit-Id: 91450bf1-4ce4-4a0b-9837-3e5d395e2e6e
	I0603 14:51:20.632842    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:20.632842    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:20.632842    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:20.632842    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:20.632842    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:20 GMT
	I0603 14:51:20.634212    9752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1959"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87038 chars]
	I0603 14:51:20.637850    9752 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:20.637850    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:20.637850    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:20.637850    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:20.638380    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:20.641347    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:20.641347    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:20.641347    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:20.641347    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:20.641347    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:20.641347    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:20.641347    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:20 GMT
	I0603 14:51:20.641347    9752 round_trippers.go:580]     Audit-Id: bb371103-81d3-4653-b86c-15dcfeb2e90e
	I0603 14:51:20.641630    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:20.642228    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:20.642284    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:20.642284    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:20.642284    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:20.643620    9752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:51:20.643620    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:20.643620    9752 round_trippers.go:580]     Audit-Id: b518e2ae-de04-49df-b025-03d350dd632b
	I0603 14:51:20.644721    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:20.644721    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:20.644721    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:20.644721    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:20.644721    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:20 GMT
	I0603 14:51:20.644721    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:21.148350    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:21.148422    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:21.148422    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:21.148422    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:21.153779    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:21.153845    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:21.153845    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:21.153845    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:21 GMT
	I0603 14:51:21.153845    9752 round_trippers.go:580]     Audit-Id: 384cf0a4-0904-48bb-a05f-857e76431560
	I0603 14:51:21.153845    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:21.153845    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:21.153845    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:21.153845    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:21.154917    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:21.154989    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:21.154989    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:21.154989    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:21.159240    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:21.159240    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:21.159240    9752 round_trippers.go:580]     Audit-Id: aaff5379-96dc-4804-80cb-63ce111dd3cb
	I0603 14:51:21.159240    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:21.159240    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:21.159900    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:21.159900    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:21.159900    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:21 GMT
	I0603 14:51:21.160055    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:21.646035    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:21.646261    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:21.646261    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:21.646261    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:21.649633    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:21.650303    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:21.650303    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:21.650303    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:21.650303    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:21.650303    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:21 GMT
	I0603 14:51:21.650303    9752 round_trippers.go:580]     Audit-Id: a0ab756c-db81-4bef-a29c-5c2f16c7c946
	I0603 14:51:21.650303    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:21.650303    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:21.651639    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:21.651639    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:21.651738    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:21.651738    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:21.654595    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:21.654595    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:21.654595    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:21.654595    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:21.654595    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:21 GMT
	I0603 14:51:21.654595    9752 round_trippers.go:580]     Audit-Id: 1b248904-9e68-41a8-b70b-36b890e04af6
	I0603 14:51:21.654595    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:21.654595    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:21.655876    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:22.145882    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:22.145985    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:22.145985    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:22.145985    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:22.150436    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:22.150436    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:22.150579    9752 round_trippers.go:580]     Audit-Id: 1f616cdf-fd49-4a1d-94b4-7be135d32db0
	I0603 14:51:22.150579    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:22.150579    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:22.150579    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:22.150579    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:22.150579    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:22 GMT
	I0603 14:51:22.150860    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:22.151548    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:22.151634    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:22.151634    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:22.151634    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:22.155369    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:22.155369    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:22.155369    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:22.155369    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:22.155369    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:22.155456    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:22.155456    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:22 GMT
	I0603 14:51:22.155456    9752 round_trippers.go:580]     Audit-Id: c3fbcea9-3d5f-43ee-bb0e-3e17e7ee651f
	I0603 14:51:22.155673    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:22.645857    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:22.645976    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:22.645976    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:22.645976    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:22.649459    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:22.649459    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:22.649459    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:22.649806    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:22.649806    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:22 GMT
	I0603 14:51:22.649806    9752 round_trippers.go:580]     Audit-Id: 0076cec9-bc97-4b1b-a213-2a22fb01e849
	I0603 14:51:22.649806    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:22.649806    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:22.650045    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:22.650881    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:22.650881    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:22.650881    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:22.650881    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:22.657479    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:22.657479    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:22.657479    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:22 GMT
	I0603 14:51:22.657479    9752 round_trippers.go:580]     Audit-Id: dd627c4f-be77-41e8-b956-a2244c41cb2b
	I0603 14:51:22.657479    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:22.657479    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:22.657479    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:22.657479    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:22.658044    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:22.658279    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:23.143043    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:23.143327    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:23.143327    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:23.143327    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:23.146769    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:23.146769    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:23.146769    9752 round_trippers.go:580]     Audit-Id: 2965518a-59e9-446f-813e-6e838b2bb701
	I0603 14:51:23.147382    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:23.147382    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:23.147382    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:23.147382    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:23.147382    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:23 GMT
	I0603 14:51:23.147573    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:23.148371    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:23.148480    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:23.148480    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:23.148480    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:23.152314    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:23.152314    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:23.152314    9752 round_trippers.go:580]     Audit-Id: 1286abdf-4034-4189-a2c5-3228082a5d8e
	I0603 14:51:23.152314    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:23.152314    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:23.152314    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:23.152314    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:23.152314    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:23 GMT
	I0603 14:51:23.152314    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:23.648495    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:23.648570    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:23.648570    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:23.648570    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:23.652317    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:23.652834    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:23.652834    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:23.652834    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:23 GMT
	I0603 14:51:23.652834    9752 round_trippers.go:580]     Audit-Id: bb4b0096-aa1b-4b6a-b974-15073a793340
	I0603 14:51:23.652834    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:23.652834    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:23.652834    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:23.653145    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:23.653471    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:23.653471    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:23.653471    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:23.653471    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:23.660415    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:23.660415    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:23.660415    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:23.660415    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:23.660415    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:23.660519    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:23.660519    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:23 GMT
	I0603 14:51:23.660519    9752 round_trippers.go:580]     Audit-Id: 7a7271f0-5161-42ed-9db4-f52edff31af1
	I0603 14:51:23.660574    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:24.148329    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:24.148329    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:24.148329    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:24.148329    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:24.152745    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:24.153216    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:24.153216    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:24.153216    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:24.153216    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:24.153216    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:24.153216    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:24 GMT
	I0603 14:51:24.153216    9752 round_trippers.go:580]     Audit-Id: 26f0014e-defc-4c70-8525-7dce10e000e7
	I0603 14:51:24.154048    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:24.154797    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:24.154797    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:24.154797    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:24.154797    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:24.157641    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:24.157641    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:24.158457    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:24 GMT
	I0603 14:51:24.158457    9752 round_trippers.go:580]     Audit-Id: ea8e37d2-e9dd-4674-9e17-08ee0c5e2282
	I0603 14:51:24.158457    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:24.158457    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:24.158457    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:24.158457    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:24.158514    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:24.650894    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:24.650894    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:24.650894    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:24.650894    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:24.654862    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:24.654967    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:24.654967    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:24.654967    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:24.654967    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:24 GMT
	I0603 14:51:24.654967    9752 round_trippers.go:580]     Audit-Id: 8d8b46f5-93f3-4a37-b03e-2d95020f7172
	I0603 14:51:24.655059    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:24.655059    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:24.655310    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:24.655960    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:24.655960    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:24.655960    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:24.655960    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:24.659616    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:24.659616    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:24.659616    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:24.659616    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:24 GMT
	I0603 14:51:24.659616    9752 round_trippers.go:580]     Audit-Id: d761579c-a1f5-4fcd-94d6-9a10d971e380
	I0603 14:51:24.659616    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:24.659616    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:24.659616    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:24.660147    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:24.660697    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:25.147282    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:25.147282    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:25.147353    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:25.147353    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:25.151000    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:25.151588    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:25.151588    9752 round_trippers.go:580]     Audit-Id: aff97a1d-ab63-467d-9b61-ac7e144da460
	I0603 14:51:25.151588    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:25.151588    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:25.151588    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:25.151588    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:25.151588    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:25 GMT
	I0603 14:51:25.151762    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:25.152544    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:25.152624    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:25.152624    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:25.152624    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:25.155216    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:25.155216    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:25.155216    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:25.155216    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:25.155216    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:25.155216    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:25.155216    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:25 GMT
	I0603 14:51:25.155216    9752 round_trippers.go:580]     Audit-Id: 8443f917-b36c-4b81-ac73-01bd81d50672
	I0603 14:51:25.155751    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:25.647077    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:25.647163    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:25.647163    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:25.647163    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:25.651703    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:25.651703    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:25.651703    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:25.651703    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:25.651801    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:25.651801    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:25 GMT
	I0603 14:51:25.651801    9752 round_trippers.go:580]     Audit-Id: 1ac74dbd-3751-4813-a9b3-a72cf029807a
	I0603 14:51:25.651801    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:25.651864    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:25.652704    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:25.652770    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:25.652770    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:25.652770    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:25.657179    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:25.657316    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:25.657316    9752 round_trippers.go:580]     Audit-Id: caa0aa34-6b35-49e9-bded-272cbe771523
	I0603 14:51:25.657316    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:25.657316    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:25.657534    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:25.657566    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:25.657566    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:25 GMT
	I0603 14:51:25.658120    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:26.152409    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:26.152671    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:26.152671    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:26.152671    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:26.156583    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:26.156583    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:26.156670    9752 round_trippers.go:580]     Audit-Id: 49a0f896-20eb-472c-8b0e-0d3c1f83b38c
	I0603 14:51:26.156670    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:26.156670    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:26.156670    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:26.156670    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:26.156670    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:26 GMT
	I0603 14:51:26.156726    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:26.157516    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:26.157516    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:26.157516    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:26.157516    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:26.160111    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:26.160111    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:26.160111    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:26 GMT
	I0603 14:51:26.160111    9752 round_trippers.go:580]     Audit-Id: 2b9fd02f-166a-46bb-8021-4b9463b8914a
	I0603 14:51:26.160111    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:26.160111    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:26.160111    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:26.160111    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:26.161100    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:26.642976    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:26.642976    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:26.643094    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:26.643094    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:26.646465    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:26.646465    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:26.646465    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:26.646465    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:26.646465    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:26.646465    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:26 GMT
	I0603 14:51:26.646465    9752 round_trippers.go:580]     Audit-Id: eb509d35-8b5f-400d-ad92-4bdbc2447f19
	I0603 14:51:26.646465    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:26.647882    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:26.648015    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:26.648605    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:26.648605    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:26.648605    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:26.650893    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:26.650893    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:26.651525    9752 round_trippers.go:580]     Audit-Id: d2cb2ffd-854b-4752-9daf-08b581750d0e
	I0603 14:51:26.651525    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:26.651525    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:26.651525    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:26.651525    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:26.651525    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:26 GMT
	I0603 14:51:26.651876    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:27.141546    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:27.141546    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:27.141546    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:27.141546    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:27.145146    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:27.145146    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:27.145146    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:27 GMT
	I0603 14:51:27.145146    9752 round_trippers.go:580]     Audit-Id: f3a66e1f-f599-4b2a-805a-45e589abf079
	I0603 14:51:27.145146    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:27.145146    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:27.145358    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:27.145358    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:27.146089    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:27.147032    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:27.147032    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:27.147032    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:27.147032    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:27.149830    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:27.149830    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:27.149956    9752 round_trippers.go:580]     Audit-Id: 1cecb31d-addf-4706-a4b0-ebf6781d0646
	I0603 14:51:27.149956    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:27.149956    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:27.149956    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:27.149956    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:27.149956    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:27 GMT
	I0603 14:51:27.150387    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:27.150819    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:27.645657    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:27.645657    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:27.645657    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:27.645657    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:27.649315    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:27.649315    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:27.649315    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:27.649315    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:27.649315    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:27.649315    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:27 GMT
	I0603 14:51:27.649315    9752 round_trippers.go:580]     Audit-Id: e523d6f9-1c36-4356-a356-26ba3ddc439c
	I0603 14:51:27.649315    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:27.650475    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:27.651356    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:27.651413    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:27.651413    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:27.651413    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:27.654180    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:27.654278    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:27.654278    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:27.654278    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:27.654278    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:27.654278    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:27 GMT
	I0603 14:51:27.654278    9752 round_trippers.go:580]     Audit-Id: c4705820-06b4-4ec3-a554-93b9829efcd6
	I0603 14:51:27.654365    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:27.654860    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:28.143016    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:28.143016    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:28.143016    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:28.143016    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:28.147969    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:28.148007    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:28.148007    9752 round_trippers.go:580]     Audit-Id: ec09f366-faf4-452c-ae46-0ef7a2db4532
	I0603 14:51:28.148007    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:28.148007    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:28.148007    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:28.148007    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:28.148007    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:28 GMT
	I0603 14:51:28.148007    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:28.149289    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:28.149406    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:28.149406    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:28.149406    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:28.151665    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:28.151665    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:28.151665    9752 round_trippers.go:580]     Audit-Id: 780dc03e-047f-417a-b40f-8335453d31b3
	I0603 14:51:28.151665    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:28.151665    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:28.151665    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:28.151665    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:28.151665    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:28 GMT
	I0603 14:51:28.151665    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:28.648416    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:28.648535    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:28.648602    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:28.648602    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:28.654547    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:28.654604    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:28.654604    9752 round_trippers.go:580]     Audit-Id: 7bb723f2-3a20-4367-b3b3-26cca104305b
	I0603 14:51:28.654604    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:28.654604    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:28.654604    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:28.654604    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:28.654604    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:28 GMT
	I0603 14:51:28.655353    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:28.656182    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:28.656182    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:28.656182    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:28.656182    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:28.660317    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:28.661203    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:28.661203    9752 round_trippers.go:580]     Audit-Id: b485a01b-ad2c-4bfe-8699-8c97857da98c
	I0603 14:51:28.661203    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:28.661203    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:28.661203    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:28.661203    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:28.661203    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:28 GMT
	I0603 14:51:28.661376    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:29.150213    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:29.150213    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:29.150213    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:29.150213    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:29.153798    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:29.153798    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:29.153798    9752 round_trippers.go:580]     Audit-Id: ef433026-9f16-45d4-a8a5-0e5967f2f372
	I0603 14:51:29.153798    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:29.153798    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:29.154739    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:29.154739    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:29.154739    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:29 GMT
	I0603 14:51:29.154922    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:29.155648    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:29.155648    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:29.155648    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:29.155648    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:29.157517    9752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:51:29.158249    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:29.158249    9752 round_trippers.go:580]     Audit-Id: ca154fcc-9b68-49e0-94ee-472316b55932
	I0603 14:51:29.158249    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:29.158249    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:29.158351    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:29.158382    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:29.158382    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:29 GMT
	I0603 14:51:29.158645    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:29.159152    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:29.648244    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:29.648409    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:29.648486    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:29.648486    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:29.652614    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:29.652614    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:29.652614    9752 round_trippers.go:580]     Audit-Id: e92bb439-6918-4bb1-abd0-70e21c07a802
	I0603 14:51:29.652614    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:29.652614    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:29.652614    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:29.652614    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:29.652614    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:29 GMT
	I0603 14:51:29.652978    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:29.653311    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:29.653311    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:29.653311    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:29.653311    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:29.657023    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:29.657023    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:29.657023    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:29.657102    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:29.657102    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:29.657102    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:29 GMT
	I0603 14:51:29.657102    9752 round_trippers.go:580]     Audit-Id: 5e2543e9-d088-4782-85de-43f615b3b5d1
	I0603 14:51:29.657102    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:29.657558    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:30.147318    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:30.147318    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:30.147318    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:30.147318    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:30.150906    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:30.151543    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:30.151543    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:30.151543    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:30.151543    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:30 GMT
	I0603 14:51:30.151543    9752 round_trippers.go:580]     Audit-Id: f9ab0c38-f0ee-45f1-b445-e4ac54a42425
	I0603 14:51:30.151683    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:30.151683    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:30.152276    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:30.153024    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:30.153157    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:30.153157    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:30.153157    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:30.156895    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:30.157027    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:30.157027    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:30 GMT
	I0603 14:51:30.157027    9752 round_trippers.go:580]     Audit-Id: 4997e132-5510-4f10-ab88-fe85a144e703
	I0603 14:51:30.157027    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:30.157027    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:30.157027    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:30.157076    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:30.157480    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:30.648887    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:30.649073    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:30.649073    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:30.649073    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:30.653502    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:30.653502    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:30.653502    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:30 GMT
	I0603 14:51:30.653502    9752 round_trippers.go:580]     Audit-Id: a01fa2fc-8243-4dcc-b5f7-5cb035420923
	I0603 14:51:30.653726    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:30.653726    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:30.653726    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:30.653726    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:30.654183    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:30.655094    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:30.655094    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:30.655094    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:30.655209    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:30.657307    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:30.657307    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:30.657307    9752 round_trippers.go:580]     Audit-Id: b1ff48fd-45c8-484c-b75f-ce51a5a1c0e0
	I0603 14:51:30.657307    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:30.657307    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:30.657307    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:30.657307    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:30.657307    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:30 GMT
	I0603 14:51:30.657719    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:31.152843    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:31.152843    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:31.152843    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:31.152843    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:31.158166    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:31.158166    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:31.158166    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:31 GMT
	I0603 14:51:31.158166    9752 round_trippers.go:580]     Audit-Id: e4a5e6ab-b5f6-4355-8954-072dc4f66296
	I0603 14:51:31.158166    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:31.158166    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:31.158166    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:31.158166    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:31.158166    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:31.159126    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:31.159192    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:31.159192    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:31.159192    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:31.161456    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:31.161456    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:31.161456    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:31.161456    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:31 GMT
	I0603 14:51:31.161456    9752 round_trippers.go:580]     Audit-Id: e841f044-67e7-4589-bfd1-b177e9b2764d
	I0603 14:51:31.161456    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:31.161456    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:31.162297    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:31.162994    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:31.163545    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:31.638468    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:31.638531    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:31.638531    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:31.638531    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:31.645620    9752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 14:51:31.645620    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:31.645620    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:31.645620    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:31.645620    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:31.645620    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:31.645620    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:31 GMT
	I0603 14:51:31.645620    9752 round_trippers.go:580]     Audit-Id: 16a229e4-e76a-46ea-aabd-fc722bc1f0b0
	I0603 14:51:31.645620    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:31.646663    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:31.646663    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:31.646663    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:31.646663    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:31.649251    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:31.649952    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:31.649952    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:31 GMT
	I0603 14:51:31.649952    9752 round_trippers.go:580]     Audit-Id: f9fbf51a-5f0e-402e-989c-d3b40a842fce
	I0603 14:51:31.649952    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:31.649952    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:31.649952    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:31.649952    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:31.650263    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:32.152429    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:32.152429    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:32.152429    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:32.152429    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:32.157040    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:32.157040    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:32.157040    9752 round_trippers.go:580]     Audit-Id: 31ae400a-f025-4d3a-8b55-ad2774a9279b
	I0603 14:51:32.157040    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:32.157040    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:32.157040    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:32.157040    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:32.157040    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:32 GMT
	I0603 14:51:32.157040    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:32.158207    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:32.158207    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:32.158207    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:32.158207    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:32.161270    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:32.161270    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:32.161270    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:32.161920    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:32 GMT
	I0603 14:51:32.161920    9752 round_trippers.go:580]     Audit-Id: deb4bab4-c395-4a96-b2f1-4d558a2c5618
	I0603 14:51:32.161920    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:32.161920    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:32.161920    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:32.161920    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:32.649759    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:32.649759    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:32.649759    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:32.649759    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:32.655609    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:32.656046    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:32.656046    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:32.656046    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:32.656046    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:32.656046    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:32.656046    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:32 GMT
	I0603 14:51:32.656046    9752 round_trippers.go:580]     Audit-Id: 31371580-5bdb-43a5-b7e9-b9daa5d0fad8
	I0603 14:51:32.656297    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:32.657117    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:32.657189    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:32.657189    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:32.657189    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:32.659616    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:32.660536    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:32.660536    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:32.660536    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:32.660536    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:32.660536    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:32 GMT
	I0603 14:51:32.660536    9752 round_trippers.go:580]     Audit-Id: 24ebbae7-d7dd-447a-9f61-66e8fc940940
	I0603 14:51:32.660536    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:32.660536    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:33.146474    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:33.146665    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:33.146665    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:33.146732    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:33.151962    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:33.151962    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:33.152048    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:33.152048    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:33.152048    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:33 GMT
	I0603 14:51:33.152048    9752 round_trippers.go:580]     Audit-Id: ef1e3b02-c660-4b2a-9f9b-06403132b44f
	I0603 14:51:33.152048    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:33.152048    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:33.152307    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:33.153135    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:33.153192    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:33.153192    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:33.153192    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:33.159425    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:33.159425    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:33.159830    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:33.159830    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:33.159830    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:33.159830    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:33.159830    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:33 GMT
	I0603 14:51:33.159830    9752 round_trippers.go:580]     Audit-Id: 2dfff49a-d1b9-4bc4-a8ae-8efd67816c3c
	I0603 14:51:33.160197    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:33.645010    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:33.645010    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:33.645010    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:33.645010    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:33.647814    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:33.648728    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:33.648793    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:33.648835    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:33.648835    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:33.648835    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:33.648835    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:33 GMT
	I0603 14:51:33.648866    9752 round_trippers.go:580]     Audit-Id: 888366a0-5d71-4b32-96b8-2791019c6de9
	I0603 14:51:33.648866    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:33.649536    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:33.649536    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:33.649536    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:33.649536    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:33.654224    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:33.654224    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:33.654224    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:33.654283    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:33.654283    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:33 GMT
	I0603 14:51:33.654305    9752 round_trippers.go:580]     Audit-Id: 543a9291-ece9-43cf-802c-b5b1b43b9bf7
	I0603 14:51:33.654305    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:33.654305    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:33.654573    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:33.654573    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:34.146636    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:34.146884    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:34.146884    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:34.146884    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:34.150944    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:34.151816    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:34.151816    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:34.151868    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:34 GMT
	I0603 14:51:34.151868    9752 round_trippers.go:580]     Audit-Id: e89babfd-d75a-4f66-8863-8196832f6316
	I0603 14:51:34.151868    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:34.151868    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:34.151901    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:34.151901    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:34.153150    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:34.153182    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:34.153182    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:34.153182    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:34.156115    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:34.156115    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:34.156115    9752 round_trippers.go:580]     Audit-Id: c63d2ebc-a699-42b7-ab14-e3d33a0f6131
	I0603 14:51:34.156115    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:34.156115    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:34.156115    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:34.156115    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:34.156115    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:34 GMT
	I0603 14:51:34.157418    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:34.646489    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:34.646680    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:34.646680    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:34.646680    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:34.651068    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:34.651206    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:34.651206    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:34.651206    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:34.651206    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:34 GMT
	I0603 14:51:34.651206    9752 round_trippers.go:580]     Audit-Id: 74365199-606c-47fa-b935-79592359b1df
	I0603 14:51:34.651206    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:34.651206    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:34.651437    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:34.652539    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:34.652610    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:34.652610    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:34.652610    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:34.655874    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:34.655874    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:34.655874    9752 round_trippers.go:580]     Audit-Id: 657dc101-8239-4709-ae75-76c4363e0595
	I0603 14:51:34.655874    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:34.655874    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:34.655874    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:34.655874    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:34.655874    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:34 GMT
	I0603 14:51:34.656313    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:35.147440    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:35.147440    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:35.147440    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:35.147440    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:35.152048    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:35.152226    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:35.152226    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:35.152226    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:35.152226    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:35.152226    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:35 GMT
	I0603 14:51:35.152226    9752 round_trippers.go:580]     Audit-Id: 83a6614c-2b20-4bb2-8f5b-0bf861852361
	I0603 14:51:35.152226    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:35.153034    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:35.153807    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:35.153866    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:35.153866    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:35.153866    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:35.156609    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:35.156609    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:35.157053    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:35.157053    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:35.157053    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:35.157053    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:35.157053    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:35 GMT
	I0603 14:51:35.157053    9752 round_trippers.go:580]     Audit-Id: 2129a726-1428-4d13-afa4-4183f98ee26d
	I0603 14:51:35.157053    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:35.645521    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:35.645521    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:35.645521    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:35.645521    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:35.649431    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:35.649495    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:35.649659    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:35.649722    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:35.649722    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:35.649722    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:35 GMT
	I0603 14:51:35.649722    9752 round_trippers.go:580]     Audit-Id: 4ba95eab-4df5-440b-a92a-684895aaf0cc
	I0603 14:51:35.649850    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:35.649918    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:35.650572    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:35.650572    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:35.650572    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:35.650572    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:35.657229    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:35.657229    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:35.657229    9752 round_trippers.go:580]     Audit-Id: 3829156e-8895-485e-9994-a26645615f68
	I0603 14:51:35.657229    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:35.657229    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:35.657229    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:35.657229    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:35.657229    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:35 GMT
	I0603 14:51:35.657229    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:35.657957    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:36.144916    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:36.145142    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:36.145142    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:36.145142    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:36.148488    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:36.148970    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:36.148970    9752 round_trippers.go:580]     Audit-Id: 1432260b-62ac-45a7-8402-a63a48acdd20
	I0603 14:51:36.148970    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:36.148970    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:36.149214    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:36.149214    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:36.149214    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:36 GMT
	I0603 14:51:36.149470    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:36.150842    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:36.150943    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:36.150943    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:36.150943    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:36.153332    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:36.153332    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:36.154559    9752 round_trippers.go:580]     Audit-Id: e6a6add3-ab12-4ffc-a53c-abcc1c3b74f2
	I0603 14:51:36.154598    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:36.154598    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:36.154598    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:36.154598    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:36.154598    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:36 GMT
	I0603 14:51:36.154887    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:36.645997    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:36.646225    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:36.646225    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:36.646225    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:36.651600    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:36.651600    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:36.651600    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:36.651600    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:36 GMT
	I0603 14:51:36.651721    9752 round_trippers.go:580]     Audit-Id: 2813be50-e0ea-44f5-ad2b-c24caed18ecf
	I0603 14:51:36.651721    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:36.651721    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:36.651721    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:36.651872    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:36.652561    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:36.652722    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:36.652722    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:36.652722    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:36.654461    9752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:51:36.655383    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:36.655383    9752 round_trippers.go:580]     Audit-Id: 1cd78818-f09f-408b-b2af-c8baf3d12c9b
	I0603 14:51:36.655444    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:36.655444    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:36.655444    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:36.655444    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:36.655444    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:36 GMT
	I0603 14:51:36.655834    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:37.142181    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:37.142181    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:37.142181    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:37.142181    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:37.146768    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:37.146768    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:37.146768    9752 round_trippers.go:580]     Audit-Id: c27f7aaf-75a3-471f-9f63-4e3361ee29f4
	I0603 14:51:37.146768    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:37.146768    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:37.146768    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:37.146768    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:37.146768    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:37 GMT
	I0603 14:51:37.147772    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:37.148871    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:37.148871    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:37.148947    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:37.148947    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:37.151795    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:37.152264    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:37.152264    9752 round_trippers.go:580]     Audit-Id: e770e9e4-ecdf-4d56-b48e-22ad0963b5db
	I0603 14:51:37.152264    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:37.152264    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:37.152264    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:37.152264    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:37.152264    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:37 GMT
	I0603 14:51:37.152264    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:37.642441    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:37.642441    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:37.642575    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:37.642575    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:37.645935    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:37.646886    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:37.646886    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:37.646886    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:37 GMT
	I0603 14:51:37.646886    9752 round_trippers.go:580]     Audit-Id: 5da79e4c-8a98-4eec-8efa-fe1e1c93a034
	I0603 14:51:37.646886    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:37.646886    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:37.646886    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:37.647194    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:37.647998    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:37.648086    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:37.648086    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:37.648086    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:37.651295    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:37.651295    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:37.651295    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:37.651295    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:37 GMT
	I0603 14:51:37.651295    9752 round_trippers.go:580]     Audit-Id: f219361f-a093-4450-b077-ad6079309455
	I0603 14:51:37.651295    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:37.651295    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:37.651295    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:37.652265    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:38.140740    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:38.140740    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:38.140740    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:38.140740    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:38.144377    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:38.144377    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:38.144377    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:38 GMT
	I0603 14:51:38.144377    9752 round_trippers.go:580]     Audit-Id: fc5e0d3f-abae-49fd-a077-f03f17c5d595
	I0603 14:51:38.144377    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:38.144377    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:38.144890    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:38.144890    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:38.145029    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:38.145432    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:38.145432    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:38.145432    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:38.145432    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:38.150746    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:38.150746    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:38.150746    9752 round_trippers.go:580]     Audit-Id: cb06cedd-b185-4f75-9516-5245ce271c09
	I0603 14:51:38.150746    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:38.150746    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:38.150746    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:38.150746    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:38.150746    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:38 GMT
	I0603 14:51:38.151513    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:38.152079    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:38.640296    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:38.640296    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:38.640296    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:38.640296    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:38.645203    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:38.645203    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:38.645203    9752 round_trippers.go:580]     Audit-Id: cb6d63fc-9765-4bd4-88d5-111a297874fe
	I0603 14:51:38.645203    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:38.645203    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:38.645203    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:38.645203    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:38.645203    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:38 GMT
	I0603 14:51:38.645203    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:38.648274    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:38.648341    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:38.648341    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:38.648341    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:38.651612    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:38.651647    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:38.651647    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:38 GMT
	I0603 14:51:38.651685    9752 round_trippers.go:580]     Audit-Id: 78b86873-cd0e-4fc9-a129-94981a2e8fc3
	I0603 14:51:38.651685    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:38.651685    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:38.651685    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:38.651685    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:38.652358    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:39.146667    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:39.146913    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:39.146913    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:39.146913    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:39.151345    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:39.151424    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:39.151424    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:39 GMT
	I0603 14:51:39.151424    9752 round_trippers.go:580]     Audit-Id: e472a206-dd76-419d-984d-062d301fa34c
	I0603 14:51:39.151424    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:39.151424    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:39.151424    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:39.151424    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:39.152793    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:39.153562    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:39.153562    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:39.153562    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:39.153643    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:39.159010    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:39.159010    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:39.159010    9752 round_trippers.go:580]     Audit-Id: a8377e39-c1ca-4ed9-922a-3e05bc2048a4
	I0603 14:51:39.159010    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:39.159010    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:39.159010    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:39.159010    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:39.159010    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:39 GMT
	I0603 14:51:39.159631    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:39.649304    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:39.649304    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:39.649304    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:39.649304    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:39.653828    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:39.653890    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:39.653954    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:39.653954    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:39.653954    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:39 GMT
	I0603 14:51:39.653954    9752 round_trippers.go:580]     Audit-Id: ac2886e3-9c4e-4cc0-b4b6-9b790560edd1
	I0603 14:51:39.653954    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:39.653954    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:39.653954    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:39.654983    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:39.654983    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:39.654983    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:39.654983    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:39.658561    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:39.658753    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:39.658753    9752 round_trippers.go:580]     Audit-Id: 5da870b9-59d5-4c71-82cf-e889187d4ad5
	I0603 14:51:39.658753    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:39.658753    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:39.658753    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:39.658753    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:39.658753    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:39 GMT
	I0603 14:51:39.659320    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:40.147435    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:40.147435    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:40.147545    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:40.147545    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:40.150230    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:40.151354    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:40.151354    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:40.151354    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:40.151354    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:40 GMT
	I0603 14:51:40.151354    9752 round_trippers.go:580]     Audit-Id: 363afa80-b5a2-4062-ac01-b085038fb402
	I0603 14:51:40.151354    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:40.151354    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:40.151354    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:40.152093    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:40.152093    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:40.152093    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:40.152093    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:40.155143    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:40.155143    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:40.155143    9752 round_trippers.go:580]     Audit-Id: 3dea9a9b-c0ff-40af-87db-8cc4da665dd4
	I0603 14:51:40.155143    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:40.155678    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:40.155678    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:40.155678    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:40.155773    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:40 GMT
	I0603 14:51:40.155773    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:40.156650    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:40.645557    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:40.645557    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:40.645557    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:40.645557    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:40.649404    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:40.649404    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:40.649404    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:40.649404    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:40 GMT
	I0603 14:51:40.649404    9752 round_trippers.go:580]     Audit-Id: 547100ae-3316-4b6f-8108-fe657f2fe507
	I0603 14:51:40.649404    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:40.649404    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:40.649886    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:40.650753    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:40.651511    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:40.651511    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:40.651511    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:40.651511    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:40.654712    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:40.654712    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:40.654777    9752 round_trippers.go:580]     Audit-Id: c1efd975-cb42-4093-9c06-d489ccf04bbf
	I0603 14:51:40.654777    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:40.654777    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:40.654777    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:40.654777    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:40.654777    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:40 GMT
	I0603 14:51:40.655247    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:41.144992    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:41.144992    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:41.144992    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:41.144992    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:41.149643    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:41.149643    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:41.149778    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:41.149778    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:41.149778    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:41.149778    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:41 GMT
	I0603 14:51:41.149778    9752 round_trippers.go:580]     Audit-Id: 19d53dca-c102-4494-865f-05614e5d2c57
	I0603 14:51:41.149778    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:41.150016    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:41.150864    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:41.150864    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:41.150864    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:41.150864    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:41.158020    9752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 14:51:41.158174    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:41.158174    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:41.158174    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:41 GMT
	I0603 14:51:41.158174    9752 round_trippers.go:580]     Audit-Id: 52846a85-8338-4b55-8f09-f4d58933ff1f
	I0603 14:51:41.158174    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:41.158174    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:41.158244    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:41.158576    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:41.649534    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:41.649608    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:41.649608    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:41.649608    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:41.653119    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:41.653593    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:41.653593    9752 round_trippers.go:580]     Audit-Id: 0efaf407-2ccf-4da6-a5ae-f9ab2f785867
	I0603 14:51:41.653593    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:41.653593    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:41.653593    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:41.653593    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:41.653593    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:41 GMT
	I0603 14:51:41.653911    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:41.654564    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:41.654564    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:41.654794    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:41.654794    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:41.657924    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:41.657924    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:41.657924    9752 round_trippers.go:580]     Audit-Id: 7b8ce6ec-03e7-41e2-b909-7ca6b1113fd7
	I0603 14:51:41.657924    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:41.657924    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:41.657924    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:41.657924    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:41.658474    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:41 GMT
	I0603 14:51:41.658717    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:42.149340    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:42.149552    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:42.149552    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:42.149552    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:42.152790    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:42.153791    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:42.153841    9752 round_trippers.go:580]     Audit-Id: f384d4fd-a218-496a-a9f7-a68c1290ab6d
	I0603 14:51:42.153841    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:42.153841    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:42.153841    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:42.153841    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:42.153841    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:42 GMT
	I0603 14:51:42.154242    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:42.155166    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:42.155166    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:42.155239    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:42.155239    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:42.157805    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:42.158855    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:42.158855    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:42.158914    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:42.158914    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:42.158914    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:42.158914    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:42 GMT
	I0603 14:51:42.158914    9752 round_trippers.go:580]     Audit-Id: a08dc58a-15d6-4ada-9270-a4b9d6a0f773
	I0603 14:51:42.159227    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:42.160405    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:42.650022    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:42.650022    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:42.650144    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:42.650144    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:42.653439    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:42.653919    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:42.653919    9752 round_trippers.go:580]     Audit-Id: f52474b9-bea2-472c-a70e-1369afca95c2
	I0603 14:51:42.653919    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:42.653919    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:42.653919    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:42.653919    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:42.653919    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:42 GMT
	I0603 14:51:42.654435    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:42.655949    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:42.655949    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:42.655949    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:42.655949    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:42.658545    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:42.659101    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:42.659101    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:42.659101    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:42.659101    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:42.659101    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:42 GMT
	I0603 14:51:42.659101    9752 round_trippers.go:580]     Audit-Id: 4737e533-bb03-4ba1-9e49-6fe2edacd8b9
	I0603 14:51:42.659101    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:42.659605    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:43.148213    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:43.148290    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:43.148290    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:43.148290    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:43.152886    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:43.152975    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:43.152975    9752 round_trippers.go:580]     Audit-Id: 92ddfaf5-575f-45e1-ae35-157abe919f3c
	I0603 14:51:43.152975    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:43.152975    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:43.152975    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:43.152975    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:43.152975    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:43 GMT
	I0603 14:51:43.153560    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:43.154404    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:43.154516    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:43.154516    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:43.154516    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:43.160962    9752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 14:51:43.160962    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:43.160962    9752 round_trippers.go:580]     Audit-Id: 79a57e35-ac7e-4f5f-b65b-60d3913b5cc8
	I0603 14:51:43.160962    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:43.160962    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:43.160962    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:43.160962    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:43.160962    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:43 GMT
	I0603 14:51:43.160962    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:43.648921    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:43.648921    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:43.648921    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:43.648921    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:43.652517    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:43.652517    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:43.652517    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:43.652517    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:43.653538    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:43.653538    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:43.653538    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:43 GMT
	I0603 14:51:43.653538    9752 round_trippers.go:580]     Audit-Id: 3f66c874-2fa3-43e7-a773-d4fc95779033
	I0603 14:51:43.653778    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:43.654648    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:43.654720    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:43.654720    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:43.654720    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:43.656881    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:43.656881    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:43.656881    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:43.656881    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:43.657348    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:43.657348    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:43.657348    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:43 GMT
	I0603 14:51:43.657348    9752 round_trippers.go:580]     Audit-Id: 9ca51f00-d57e-47ad-a79c-aff8c9fed510
	I0603 14:51:43.657348    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:44.148819    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:44.148819    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:44.148819    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:44.148819    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:44.152529    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:44.152944    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:44.152944    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:44.152944    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:44.152944    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:44 GMT
	I0603 14:51:44.152944    9752 round_trippers.go:580]     Audit-Id: 622a3778-a063-4eb1-944f-fda8b28c0893
	I0603 14:51:44.152944    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:44.152944    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:44.152944    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:44.153932    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:44.153932    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:44.153932    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:44.153932    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:44.156525    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:44.156898    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:44.156898    9752 round_trippers.go:580]     Audit-Id: 328006eb-b542-42af-96e2-c20e0fbd062d
	I0603 14:51:44.156898    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:44.156898    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:44.156898    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:44.156898    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:44.156898    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:44 GMT
	I0603 14:51:44.157001    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:44.653062    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:44.653062    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:44.653062    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:44.653062    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:44.655872    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:44.655872    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:44.655872    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:44 GMT
	I0603 14:51:44.655872    9752 round_trippers.go:580]     Audit-Id: f0dc7fd0-5844-4ae2-bd7f-2be61b390952
	I0603 14:51:44.655872    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:44.655872    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:44.655872    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:44.655872    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:44.656865    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:44.657871    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:44.658876    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:44.658876    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:44.658876    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:44.660868    9752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:51:44.661876    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:44.661947    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:44.661947    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:44 GMT
	I0603 14:51:44.661947    9752 round_trippers.go:580]     Audit-Id: c2925120-9bce-4829-940a-51adb032f50d
	I0603 14:51:44.661947    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:44.661947    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:44.661947    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:44.662387    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:44.662918    9752 pod_ready.go:102] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"False"
	I0603 14:51:45.145061    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:45.145284    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:45.145329    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:45.145329    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:45.149788    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:45.149788    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:45.149788    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:45 GMT
	I0603 14:51:45.149788    9752 round_trippers.go:580]     Audit-Id: 09a22687-e4bd-4626-94bd-74eb48db54fe
	I0603 14:51:45.149788    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:45.149788    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:45.149788    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:45.149788    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:45.149788    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:45.150769    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:45.150769    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:45.150769    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:45.150769    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:45.155522    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:45.155522    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:45.155522    9752 round_trippers.go:580]     Audit-Id: 42d83b26-67e8-4e04-86af-dc159d2a6a7c
	I0603 14:51:45.155522    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:45.155635    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:45.155635    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:45.155635    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:45.155635    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:45 GMT
	I0603 14:51:45.155962    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:45.650063    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:45.650255    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:45.650255    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:45.650255    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:45.654990    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:45.655277    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:45.655277    9752 round_trippers.go:580]     Audit-Id: fd5a6325-fa62-4ea2-990a-78573cffa89f
	I0603 14:51:45.655277    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:45.655277    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:45.655277    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:45.655277    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:45.655277    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:45 GMT
	I0603 14:51:45.655727    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1810","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0603 14:51:45.656053    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:45.656053    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:45.656053    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:45.656053    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:45.661722    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:45.661722    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:45.661722    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:45.661722    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:45 GMT
	I0603 14:51:45.661722    9752 round_trippers.go:580]     Audit-Id: ea8fd5f0-d0ad-457d-ae04-ea13e401b8b6
	I0603 14:51:45.661722    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:45.661722    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:45.661722    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:45.662386    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:46.143779    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9wpc
	I0603 14:51:46.143779    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.143955    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.143955    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.148213    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:46.148213    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.148213    9752 round_trippers.go:580]     Audit-Id: 40fec9f6-64f8-49df-b38f-8e1048f437c6
	I0603 14:51:46.148213    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.148213    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.148213    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.148213    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.148213    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.148213    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1984","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0603 14:51:46.150494    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:46.150494    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.151605    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.151731    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.156353    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:46.156353    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.156353    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.156353    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.156353    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.156353    9752 round_trippers.go:580]     Audit-Id: 3292039c-fb70-4251-9078-30ff9b5804c5
	I0603 14:51:46.156353    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.156353    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.157173    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:46.157957    9752 pod_ready.go:92] pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace has status "Ready":"True"
	I0603 14:51:46.158001    9752 pod_ready.go:81] duration metric: took 25.5199418s for pod "coredns-7db6d8ff4d-c9wpc" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.158100    9752 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.158227    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-720500
	I0603 14:51:46.158306    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.158306    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.158306    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.163555    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:46.163555    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.163555    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.163555    9752 round_trippers.go:580]     Audit-Id: b0d3c4dc-1d33-4c77-9d43-e4f8aa732a7a
	I0603 14:51:46.163555    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.163555    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.163555    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.163555    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.164117    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-720500","namespace":"kube-system","uid":"1a2533a2-16e9-4696-9694-186579c52b55","resourceVersion":"1922","creationTimestamp":"2024-06-03T14:50:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.22.154.20:2379","kubernetes.io/config.hash":"7a9c45e53018cd74c5a13ccfd96f1479","kubernetes.io/config.mirror":"7a9c45e53018cd74c5a13ccfd96f1479","kubernetes.io/config.seen":"2024-06-03T14:50:33.894763922Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:50:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0603 14:51:46.164319    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:46.164319    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.164319    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.164319    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.167779    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:46.167779    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.167779    9752 round_trippers.go:580]     Audit-Id: 85afce63-3f0b-48c9-b565-c3e87f6b41a5
	I0603 14:51:46.167779    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.168754    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.168754    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.168754    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.168754    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.169125    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:46.169559    9752 pod_ready.go:92] pod "etcd-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:51:46.169593    9752 pod_ready.go:81] duration metric: took 11.4921ms for pod "etcd-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.169593    9752 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.169731    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-720500
	I0603 14:51:46.169731    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.169731    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.169731    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.173842    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:46.173842    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.174035    9752 round_trippers.go:580]     Audit-Id: ba0a55f5-adda-4d8a-8a83-78e87e186a38
	I0603 14:51:46.174035    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.174096    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.174096    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.174096    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.174096    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.174482    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-720500","namespace":"kube-system","uid":"b27b9256-3c5b-4432-8a9e-ebe5303b88f0","resourceVersion":"1921","creationTimestamp":"2024-06-03T14:50:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.22.154.20:8443","kubernetes.io/config.hash":"a9aa17bec6c8b90196f8771e2e5c6391","kubernetes.io/config.mirror":"a9aa17bec6c8b90196f8771e2e5c6391","kubernetes.io/config.seen":"2024-06-03T14:50:33.891701119Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:50:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0603 14:51:46.174970    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:46.174970    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.174970    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.174970    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.178842    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:46.178842    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.178842    9752 round_trippers.go:580]     Audit-Id: e9e1c594-e8d8-40ca-a592-38a75e8f6844
	I0603 14:51:46.178842    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.178842    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.179103    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.179103    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.179142    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.179562    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:46.180004    9752 pod_ready.go:92] pod "kube-apiserver-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:51:46.180036    9752 pod_ready.go:81] duration metric: took 10.4432ms for pod "kube-apiserver-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.180036    9752 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.180148    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-720500
	I0603 14:51:46.180148    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.180148    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.180214    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.184993    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:46.184993    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.185112    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.185112    9752 round_trippers.go:580]     Audit-Id: e5b5f9b2-83e2-4ecd-8d32-16a9687f41ed
	I0603 14:51:46.185112    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.185112    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.185112    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.185112    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.185683    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-720500","namespace":"kube-system","uid":"6ba9c1e5-75bb-4731-9105-49acbbf3f237","resourceVersion":"1895","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"78d1bd07ad8cdd8611c0b5d7e797ef30","kubernetes.io/config.mirror":"78d1bd07ad8cdd8611c0b5d7e797ef30","kubernetes.io/config.seen":"2024-06-03T14:27:18.382156638Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0603 14:51:46.186449    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:46.186449    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.186561    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.186561    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.189195    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:46.189195    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.189195    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.189195    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.189195    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.189195    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.189195    9752 round_trippers.go:580]     Audit-Id: c71eb95a-f485-4124-93a1-1a8d60332f39
	I0603 14:51:46.189195    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.189195    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:46.190315    9752 pod_ready.go:92] pod "kube-controller-manager-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:51:46.190418    9752 pod_ready.go:81] duration metric: took 10.3825ms for pod "kube-controller-manager-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.190418    9752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-64l9x" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.190576    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-64l9x
	I0603 14:51:46.190604    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.190604    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.190651    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.192948    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:46.193784    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.193784    9752 round_trippers.go:580]     Audit-Id: a9646396-f5ed-4dd3-b273-572387c0ca82
	I0603 14:51:46.193820    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.193820    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.193820    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.193820    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.193820    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.194130    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-64l9x","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a","resourceVersion":"1822","creationTimestamp":"2024-06-03T14:27:32Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0603 14:51:46.194863    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:46.194913    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.194913    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.194942    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.197711    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:46.198044    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.198102    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.198102    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.198102    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.198102    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.198102    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.198179    9752 round_trippers.go:580]     Audit-Id: 9e308880-46f9-4d35-9c6b-5bc2a1e05f62
	I0603 14:51:46.198420    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:46.198906    9752 pod_ready.go:92] pod "kube-proxy-64l9x" in "kube-system" namespace has status "Ready":"True"
	I0603 14:51:46.198906    9752 pod_ready.go:81] duration metric: took 8.4376ms for pod "kube-proxy-64l9x" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.198952    9752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ctm5l" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.355306    9752 request.go:629] Waited for 156.1037ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctm5l
	I0603 14:51:46.355497    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ctm5l
	I0603 14:51:46.355497    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.355497    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.355497    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.360270    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:46.360395    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.360395    9752 round_trippers.go:580]     Audit-Id: 29fca7d7-17f8-4079-ab18-828e0b70fc18
	I0603 14:51:46.360458    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.360458    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.360458    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.360458    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.360458    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.360794    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ctm5l","generateName":"kube-proxy-","namespace":"kube-system","uid":"38069b1b-8ba9-46af-b4e7-7add5d9c67fc","resourceVersion":"1761","creationTimestamp":"2024-06-03T14:35:14Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
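The "Waited for ... due to client-side throttling, not priority and fairness" entries above and below come from client-go's client-side token-bucket rate limiter (the QPS/Burst settings on the REST config), not from server-side API Priority and Fairness. A minimal sketch of how such a limiter is configured, assuming a standard kubeconfig and illustrative QPS/Burst values rather than minikube's actual settings:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a rest.Config from the default kubeconfig location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Client-side token-bucket limiter: once requests exceed Burst,
        // further calls block until tokens refill at QPS, and client-go
        // logs the "Waited for ..." message seen in this trace.
        // Values here are illustrative, not minikube's settings.
        cfg.QPS = 5
        cfg.Burst = 10

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-system pods:", len(pods.Items))
    }

With a small burst budget, a run of back-to-back GETs like the ones in this trace blocks the caller for a few hundred milliseconds per request, which is exactly the per-request wait duration being logged.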
	I0603 14:51:46.556036    9752 request.go:629] Waited for 194.6279ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m03
	I0603 14:51:46.556198    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m03
	I0603 14:51:46.556198    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.556198    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.556198    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.560242    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:46.560340    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.560340    9752 round_trippers.go:580]     Audit-Id: 4ca8984b-d722-40a4-9174-94b0ce70bc9b
	I0603 14:51:46.560340    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.560340    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.560340    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.560340    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.560340    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.560632    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m03","uid":"daf03ea9-c0d0-4565-9ad8-44cd4fce8e19","resourceVersion":"1970","creationTimestamp":"2024-06-03T14:46:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_46_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:46:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0603 14:51:46.560789    9752 pod_ready.go:97] node "multinode-720500-m03" hosting pod "kube-proxy-ctm5l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m03" has status "Ready":"Unknown"
	I0603 14:51:46.560789    9752 pod_ready.go:81] duration metric: took 361.8334ms for pod "kube-proxy-ctm5l" in "kube-system" namespace to be "Ready" ...
	E0603 14:51:46.560789    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500-m03" hosting pod "kube-proxy-ctm5l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m03" has status "Ready":"Unknown"
	I0603 14:51:46.560789    9752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sm9rr" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:46.757822    9752 request.go:629] Waited for 196.3304ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sm9rr
	I0603 14:51:46.758118    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sm9rr
	I0603 14:51:46.758240    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.758240    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.758240    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.762063    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:46.762063    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.762063    9752 round_trippers.go:580]     Audit-Id: bb4d5d35-12de-46d3-8273-2f23908ac552
	I0603 14:51:46.762147    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.762147    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.762147    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.762147    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.762147    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.762203    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sm9rr","generateName":"kube-proxy-","namespace":"kube-system","uid":"4f0321c0-f47d-463e-bda2-919f37735748","resourceVersion":"1786","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"463002dd-988d-4917-84c4-5103363716bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"463002dd-988d-4917-84c4-5103363716bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0603 14:51:46.945454    9752 request.go:629] Waited for 182.0316ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:51:46.945555    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500-m02
	I0603 14:51:46.945748    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:46.945748    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:46.945748    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:46.949377    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:46.950155    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:46.950155    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:46 GMT
	I0603 14:51:46.950155    9752 round_trippers.go:580]     Audit-Id: 3a16cf6b-1037-4663-873c-2ae7d060f122
	I0603 14:51:46.950155    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:46.950155    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:46.950155    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:46.950155    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:46.950535    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500-m02","uid":"06afa94a-e6df-4bb6-9f0c-9ec96714199b","resourceVersion":"1974","creationTimestamp":"2024-06-03T14:30:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_03T14_30_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4583 chars]
	I0603 14:51:46.950662    9752 pod_ready.go:97] node "multinode-720500-m02" hosting pod "kube-proxy-sm9rr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m02" has status "Ready":"Unknown"
	I0603 14:51:46.950662    9752 pod_ready.go:81] duration metric: took 389.8701ms for pod "kube-proxy-sm9rr" in "kube-system" namespace to be "Ready" ...
	E0603 14:51:46.950662    9752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-720500-m02" hosting pod "kube-proxy-sm9rr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-720500-m02" has status "Ready":"Unknown"
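The WaitExtra errors above skip the pod wait because the hosting node reports "Ready":"Unknown". A minimal sketch, assuming a plain client-go corev1.Node, of the kind of Ready-condition check that drives this decision (isNodeReady is an illustrative helper, not minikube's own):

    package nodecheck

    import corev1 "k8s.io/api/core/v1"

    // isNodeReady reports whether the node has a NodeReady condition with
    // status "True". When it does not (as for multinode-720500-m02 and -m03
    // above), waiting on pods scheduled to that node is skipped.
    func isNodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }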
	I0603 14:51:46.950662    9752 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:47.147455    9752 request.go:629] Waited for 195.9967ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-720500
	I0603 14:51:47.147455    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-720500
	I0603 14:51:47.147455    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:47.147455    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:47.147699    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:47.151484    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:47.151836    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:47.151836    9752 round_trippers.go:580]     Audit-Id: a2cd29d3-5dc1-4a57-bc2c-88c9819db781
	I0603 14:51:47.151836    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:47.151836    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:47.151836    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:47.151836    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:47.151836    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:47 GMT
	I0603 14:51:47.151997    9752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-720500","namespace":"kube-system","uid":"9d420d28-dde0-4504-a4d4-f840cab56ebe","resourceVersion":"1826","creationTimestamp":"2024-06-03T14:27:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f58e384885de6f2352fb028e836ba47f","kubernetes.io/config.mirror":"f58e384885de6f2352fb028e836ba47f","kubernetes.io/config.seen":"2024-06-03T14:27:18.382157538Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0603 14:51:47.350883    9752 request.go:629] Waited for 198.4263ms due to client-side throttling, not priority and fairness, request: GET:https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:47.350950    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes/multinode-720500
	I0603 14:51:47.351036    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:47.351036    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:47.351036    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:47.353781    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:47.354681    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:47.354681    9752 round_trippers.go:580]     Audit-Id: 37c5c085-959b-46dc-8592-739e003d4822
	I0603 14:51:47.354681    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:47.354681    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:47.354681    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:47.354681    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:47.354770    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:47 GMT
	I0603 14:51:47.354897    9752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-03T14:27:15Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0603 14:51:47.355424    9752 pod_ready.go:92] pod "kube-scheduler-multinode-720500" in "kube-system" namespace has status "Ready":"True"
	I0603 14:51:47.355536    9752 pod_ready.go:81] duration metric: took 404.8703ms for pod "kube-scheduler-multinode-720500" in "kube-system" namespace to be "Ready" ...
	I0603 14:51:47.355567    9752 pod_ready.go:38] duration metric: took 26.7310044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
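The pod_ready phase logged above is a poll-until-Ready loop over each system-critical pod, recording a duration metric once the Ready condition flips to True. A rough sketch of that pattern with client-go, assuming a 500ms poll interval and an illustrative helper name (waitPodReady); minikube's actual implementation differs:

    package podwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True or the
    // timeout expires, mirroring the shape of the pod_ready loop above.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

Each poll iteration is one GET on the pod (and, in the trace above, one on its node), which is why the Ready wait for coredns-7db6d8ff4d-c9wpc produces the long run of paired requests before the 25.5s duration metric is emitted.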
	I0603 14:51:47.355567    9752 api_server.go:52] waiting for apiserver process to appear ...
	I0603 14:51:47.366477    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0603 14:51:47.390112    9752 command_runner.go:130] > 885576ffcadd
	I0603 14:51:47.390231    9752 logs.go:276] 1 containers: [885576ffcadd]
	I0603 14:51:47.402409    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0603 14:51:47.433941    9752 command_runner.go:130] > 480ef64cfa22
	I0603 14:51:47.433941    9752 logs.go:276] 1 containers: [480ef64cfa22]
	I0603 14:51:47.450044    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0603 14:51:47.477890    9752 command_runner.go:130] > f9b260d61dfb
	I0603 14:51:47.477890    9752 command_runner.go:130] > 68e49c3e6dda
	I0603 14:51:47.478605    9752 logs.go:276] 2 containers: [f9b260d61dfb 68e49c3e6dda]
	I0603 14:51:47.486174    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0603 14:51:47.513873    9752 command_runner.go:130] > e2d000674d52
	I0603 14:51:47.513873    9752 command_runner.go:130] > ec3860b2bb3e
	I0603 14:51:47.513873    9752 logs.go:276] 2 containers: [e2d000674d52 ec3860b2bb3e]
	I0603 14:51:47.523710    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0603 14:51:47.543960    9752 command_runner.go:130] > 42926c33070c
	I0603 14:51:47.543960    9752 command_runner.go:130] > 3823f2e2bdb2
	I0603 14:51:47.545185    9752 logs.go:276] 2 containers: [42926c33070c 3823f2e2bdb2]
	I0603 14:51:47.554161    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0603 14:51:47.578871    9752 command_runner.go:130] > f14b3b67d8f2
	I0603 14:51:47.578871    9752 command_runner.go:130] > 63a6ebee2e83
	I0603 14:51:47.578871    9752 logs.go:276] 2 containers: [f14b3b67d8f2 63a6ebee2e83]
	I0603 14:51:47.588695    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0603 14:51:47.611240    9752 command_runner.go:130] > 008dec75d90c
	I0603 14:51:47.611240    9752 command_runner.go:130] > ab840a6a9856
	I0603 14:51:47.611817    9752 logs.go:276] 2 containers: [008dec75d90c ab840a6a9856]
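Each "N containers: [...]" line above comes from running docker ps -a with a per-component name filter and an ID-only Go template, as shown in the ssh_runner commands. A small sketch of the same command shape via os/exec (listContainerIDs is an illustrative helper, not minikube's ssh_runner):

    package containers

    import (
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the IDs of all containers (running or exited)
    // whose name matches the given filter, e.g. "k8s_kube-apiserver",
    // mirroring the `docker ps -a --filter=name=... --format={{.ID}}` calls above.
    func listContainerIDs(nameFilter string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name="+nameFilter,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }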
	I0603 14:51:47.611874    9752 logs.go:123] Gathering logs for dmesg ...
	I0603 14:51:47.611874    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 14:51:47.633367    9752 command_runner.go:130] > [Jun 3 14:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0603 14:51:47.633466    9752 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0603 14:51:47.633466    9752 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0603 14:51:47.633466    9752 command_runner.go:130] > [  +0.128622] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0603 14:51:47.633540    9752 command_runner.go:130] > [  +0.023991] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0603 14:51:47.633540    9752 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0603 14:51:47.633540    9752 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0603 14:51:47.633606    9752 command_runner.go:130] > [  +0.059620] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0603 14:51:47.633606    9752 command_runner.go:130] > [  +0.020549] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0603 14:51:47.633606    9752 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0603 14:51:47.633606    9752 command_runner.go:130] > [  +5.342920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0603 14:51:47.633681    9752 command_runner.go:130] > [  +0.685939] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0603 14:51:47.633681    9752 command_runner.go:130] > [  +1.735023] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0603 14:51:47.633681    9752 command_runner.go:130] > [Jun 3 14:49] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0603 14:51:47.633681    9752 command_runner.go:130] > [  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0603 14:51:47.633815    9752 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0603 14:51:47.633841    9752 command_runner.go:130] > [ +50.878858] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.173829] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [Jun 3 14:50] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.115993] kauditd_printk_skb: 73 callbacks suppressed
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.526092] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.219569] systemd-fstab-generator[1032]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.239915] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +2.915659] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.214861] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.207351] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.266530] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.876661] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +0.110633] kauditd_printk_skb: 205 callbacks suppressed
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +3.640158] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +1.365325] kauditd_printk_skb: 49 callbacks suppressed
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +5.844179] kauditd_printk_skb: 25 callbacks suppressed
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +3.106296] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	I0603 14:51:47.633841    9752 command_runner.go:130] > [  +8.568344] kauditd_printk_skb: 70 callbacks suppressed
	I0603 14:51:47.635804    9752 logs.go:123] Gathering logs for kube-apiserver [885576ffcadd] ...
	I0603 14:51:47.635804    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 885576ffcadd"
	I0603 14:51:47.664976    9752 command_runner.go:130] ! I0603 14:50:36.316662       1 options.go:221] external host was not specified, using 172.22.154.20
	I0603 14:51:47.665203    9752 command_runner.go:130] ! I0603 14:50:36.322174       1 server.go:148] Version: v1.30.1
	I0603 14:51:47.665324    9752 command_runner.go:130] ! I0603 14:50:36.322276       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:47.665324    9752 command_runner.go:130] ! I0603 14:50:37.048360       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 14:51:47.665449    9752 command_runner.go:130] ! I0603 14:50:37.061107       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:51:47.665449    9752 command_runner.go:130] ! I0603 14:50:37.064640       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 14:51:47.665525    9752 command_runner.go:130] ! I0603 14:50:37.064927       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 14:51:47.665593    9752 command_runner.go:130] ! I0603 14:50:37.065980       1 instance.go:299] Using reconciler: lease
	I0603 14:51:47.665593    9752 command_runner.go:130] ! I0603 14:50:37.835903       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0603 14:51:47.665655    9752 command_runner.go:130] ! W0603 14:50:37.835946       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.665700    9752 command_runner.go:130] ! I0603 14:50:38.131228       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0603 14:51:47.665700    9752 command_runner.go:130] ! I0603 14:50:38.131786       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0603 14:51:47.665767    9752 command_runner.go:130] ! I0603 14:50:38.389972       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0603 14:51:47.665809    9752 command_runner.go:130] ! I0603 14:50:38.554749       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0603 14:51:47.665858    9752 command_runner.go:130] ! I0603 14:50:38.569175       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0603 14:51:47.665875    9752 command_runner.go:130] ! W0603 14:50:38.569288       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.665875    9752 command_runner.go:130] ! W0603 14:50:38.569316       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.665875    9752 command_runner.go:130] ! I0603 14:50:38.570033       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0603 14:51:47.665965    9752 command_runner.go:130] ! W0603 14:50:38.570117       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.665992    9752 command_runner.go:130] ! I0603 14:50:38.571568       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0603 14:51:47.665992    9752 command_runner.go:130] ! I0603 14:50:38.572496       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0603 14:51:47.666028    9752 command_runner.go:130] ! W0603 14:50:38.572572       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0603 14:51:47.666028    9752 command_runner.go:130] ! W0603 14:50:38.572581       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0603 14:51:47.666028    9752 command_runner.go:130] ! I0603 14:50:38.574368       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0603 14:51:47.666085    9752 command_runner.go:130] ! W0603 14:50:38.574469       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0603 14:51:47.666085    9752 command_runner.go:130] ! I0603 14:50:38.575393       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0603 14:51:47.666107    9752 command_runner.go:130] ! W0603 14:50:38.575496       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.575505       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.576166       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.576256       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.576314       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.577021       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.579498       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.579572       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.579581       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.580213       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.580317       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.580354       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.581564       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.581613       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.584780       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.585003       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.585204       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.586651       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.586996       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.587142       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.595038       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.595233       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.595389       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.598793       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.602076       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.614489       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.614724       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.625009       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.625156       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! W0603 14:50:38.625167       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0603 14:51:47.666133    9752 command_runner.go:130] ! I0603 14:50:38.628702       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0603 14:51:47.666683    9752 command_runner.go:130] ! W0603 14:50:38.628761       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666683    9752 command_runner.go:130] ! W0603 14:50:38.628770       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:47.666683    9752 command_runner.go:130] ! I0603 14:50:38.629748       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0603 14:51:47.666683    9752 command_runner.go:130] ! W0603 14:50:38.629860       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666779    9752 command_runner.go:130] ! I0603 14:50:38.645169       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0603 14:51:47.666779    9752 command_runner.go:130] ! W0603 14:50:38.645265       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:47.666779    9752 command_runner.go:130] ! I0603 14:50:39.261254       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:47.666779    9752 command_runner.go:130] ! I0603 14:50:39.261440       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:47.666888    9752 command_runner.go:130] ! I0603 14:50:39.261269       1 secure_serving.go:213] Serving securely on [::]:8443
	I0603 14:51:47.666888    9752 command_runner.go:130] ! I0603 14:50:39.261878       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:47.666888    9752 command_runner.go:130] ! I0603 14:50:39.262067       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0603 14:51:47.666888    9752 command_runner.go:130] ! I0603 14:50:39.265023       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0603 14:51:47.666965    9752 command_runner.go:130] ! I0603 14:50:39.265458       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0603 14:51:47.666965    9752 command_runner.go:130] ! I0603 14:50:39.265691       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0603 14:51:47.666965    9752 command_runner.go:130] ! I0603 14:50:39.266224       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0603 14:51:47.666965    9752 command_runner.go:130] ! I0603 14:50:39.266475       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0603 14:51:47.667023    9752 command_runner.go:130] ! I0603 14:50:39.266740       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0603 14:51:47.667045    9752 command_runner.go:130] ! I0603 14:50:39.267054       1 aggregator.go:163] waiting for initial CRD sync...
	I0603 14:51:47.667045    9752 command_runner.go:130] ! I0603 14:50:39.267429       1 controller.go:116] Starting legacy_token_tracking_controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.267943       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.268211       1 controller.go:78] Starting OpenAPI AggregationController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.268471       1 available_controller.go:423] Starting AvailableConditionController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.268557       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.268599       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.269220       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.284296       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.284599       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.269381       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285184       1 controller.go:139] Starting OpenAPI controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285202       1 controller.go:87] Starting OpenAPI V3 controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285216       1 naming_controller.go:291] Starting NamingConditionController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285225       1 establishing_controller.go:76] Starting EstablishingController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285237       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285244       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285251       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.285707       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.307386       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.313286       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.410099       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.413505       1 aggregator.go:165] initial CRD sync complete...
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.413538       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.413547       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.450903       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.462513       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.464182       1 policy_source.go:224] refreshing policies
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.465876       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.466992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.468755       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.469769       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 14:51:47.667073    9752 command_runner.go:130] ! I0603 14:50:39.474781       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 14:51:47.667671    9752 command_runner.go:130] ! I0603 14:50:39.486280       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 14:51:47.667671    9752 command_runner.go:130] ! I0603 14:50:39.486306       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 14:51:47.667794    9752 command_runner.go:130] ! I0603 14:50:39.514217       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 14:51:47.667820    9752 command_runner.go:130] ! I0603 14:50:39.514539       1 cache.go:39] Caches are synced for autoregister controller
	I0603 14:51:47.667856    9752 command_runner.go:130] ! I0603 14:50:40.271657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 14:51:47.667856    9752 command_runner.go:130] ! W0603 14:50:40.806504       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.22.154.20]
	I0603 14:51:47.667918    9752 command_runner.go:130] ! I0603 14:50:40.811756       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 14:51:47.667918    9752 command_runner.go:130] ! I0603 14:50:40.836037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 14:51:47.667957    9752 command_runner.go:130] ! I0603 14:50:42.134633       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 14:51:47.667957    9752 command_runner.go:130] ! I0603 14:50:42.350516       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 14:51:47.667989    9752 command_runner.go:130] ! I0603 14:50:42.378696       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 14:51:47.667989    9752 command_runner.go:130] ! I0603 14:50:42.521546       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 14:51:47.667989    9752 command_runner.go:130] ! I0603 14:50:42.533218       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 14:51:47.674761    9752 logs.go:123] Gathering logs for etcd [480ef64cfa22] ...
	I0603 14:51:47.675398    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480ef64cfa22"
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:35.886507Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.887805Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.22.154.20:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.22.154.20:2380","--initial-cluster=multinode-720500=https://172.22.154.20:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.22.154.20:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.22.154.20:2380","--name=multinode-720500","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--prox
y-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888235Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:35.88843Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888669Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.22.154.20:2380"]}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888851Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.900566Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"]}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.902079Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-720500","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initia
l-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.951251Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"47.801744ms"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.980047Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.011946Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","commit-index":2070}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=()"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became follower at term 2"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a5b02d21ad5b31ff [peers: [], term: 2, commit: 2070, applied: 0, lastindex: 2070, lastterm: 2]"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:36.026369Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.034388Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1394}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.043305Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1796}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.052705Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0603 14:51:47.699402    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.062682Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"a5b02d21ad5b31ff","timeout":"7s"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.063103Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"a5b02d21ad5b31ff"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.063165Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"a5b02d21ad5b31ff","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06697Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06815Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.068652Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.068733Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=(11939092234824790527)"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","added-peer-id":"a5b02d21ad5b31ff","added-peer-peer-urls":["https://172.22.150.195:2380"]}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","cluster-version":"3.5"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069633Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069793Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a5b02d21ad5b31ff","initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069837Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069995Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.22.154.20:2380"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.070008Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.22.154.20:2380"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.714622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff is starting a new election at term 2"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became pre-candidate at term 2"}
	I0603 14:51:47.700429    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.71538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgPreVoteResp from a5b02d21ad5b31ff at term 2"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became candidate at term 3"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgVoteResp from a5b02d21ad5b31ff at term 3"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.716205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became leader at term 3"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.716405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a5b02d21ad5b31ff elected leader a5b02d21ad5b31ff at term 3"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.724847Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.724791Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a5b02d21ad5b31ff","local-member-attributes":"{Name:multinode-720500 ClientURLs:[https://172.22.154.20:2379]}","request-path":"/0/members/a5b02d21ad5b31ff/attributes","cluster-id":"6a80a2fe8578e5e6","publish-timeout":"7s"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.725564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.726196Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.726364Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.729309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0603 14:51:47.701401    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.730855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.22.154.20:2379"}
	I0603 14:51:47.707397    9752 logs.go:123] Gathering logs for kube-proxy [3823f2e2bdb2] ...
	I0603 14:51:47.707397    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3823f2e2bdb2"
	I0603 14:51:47.731404    9752 command_runner.go:130] ! I0603 14:27:34.209759       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.223354       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.150.195"]
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.293018       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.293146       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.293240       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.299545       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.300745       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.300860       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.304329       1 config.go:192] "Starting service config controller"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.304371       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.304437       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.304447       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.308322       1 config.go:319] "Starting node config controller"
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.308362       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:51:47.732436    9752 command_runner.go:130] ! I0603 14:27:34.409156       1 shared_informer.go:320] Caches are synced for node config
	I0603 14:51:47.734017    9752 logs.go:123] Gathering logs for kindnet [ab840a6a9856] ...
	I0603 14:51:47.735012    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab840a6a9856"
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:02.418496       1 main.go:227] handling current node
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:02.418509       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:02.418514       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:02.419057       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:02.419146       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:12.433874       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:12.433964       1 main.go:227] handling current node
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:12.433979       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:12.433987       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:12.434708       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:12.434812       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:22.441734       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:22.443317       1 main.go:227] handling current node
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:22.443366       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:22.443394       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:22.443536       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:22.443544       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:32.458669       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:32.458715       1 main.go:227] handling current node
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:32.458746       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:32.458759       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:32.459272       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:32.459313       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:42.465893       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:42.466039       1 main.go:227] handling current node
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:42.466054       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.778019    9752 command_runner.go:130] ! I0603 14:37:42.466062       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:42.466530       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:42.466713       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:52.484160       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:52.484343       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:52.484358       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:52.484366       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:52.484918       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:37:52.485003       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:02.499379       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:02.500157       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:02.500459       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:02.500600       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:02.500943       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:02.501037       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:12.510568       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:12.510676       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:12.510691       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:12.510699       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:12.511065       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:12.511143       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:22.523564       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:22.523667       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:22.523681       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:22.523690       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:22.524005       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:22.524127       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:32.531830       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:32.532127       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:32.532312       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:32.532328       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:32.532640       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:32.532677       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:42.545963       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:42.546065       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:42.546080       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:42.546088       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:42.546348       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:42.546488       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:52.559438       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:52.559480       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:52.559491       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:52.559497       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:52.559891       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:38:52.560039       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:02.565901       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:02.566044       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:02.566059       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:02.566066       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:02.566452       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:02.566542       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:12.580562       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:12.580900       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:12.581000       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:12.581036       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:12.581299       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:12.581368       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:22.589560       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:22.589667       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:22.589684       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:22.589692       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:22.590588       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:22.590765       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:32.597414       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:32.597518       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:32.597534       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:32.597541       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:32.597952       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:32.598225       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:42.608987       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:42.609016       1 main.go:227] handling current node
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:42.609075       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:42.609129       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:42.609601       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.779023    9752 command_runner.go:130] ! I0603 14:39:42.609617       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:39:52.622153       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:39:52.622304       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:39:52.622322       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:39:52.622329       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:39:52.622994       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:39:52.623087       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:02.643681       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:02.643725       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:02.643738       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:02.643744       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:02.644288       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:02.644378       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:12.652030       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:12.652123       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:12.652138       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:12.652145       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:12.652402       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:12.652480       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:22.661893       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:22.661999       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:22.662015       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:22.662023       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:22.662623       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:22.662711       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:32.676552       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:32.676654       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:32.676669       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:32.676677       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:32.676798       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:32.676829       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:42.690358       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:42.690463       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:42.690478       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:42.690485       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:42.691131       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:42.691265       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:52.704086       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:52.704406       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:52.704615       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:52.704801       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:52.705555       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:40:52.705594       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:02.714922       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:02.715404       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:02.715629       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:02.715697       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:02.715836       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:02.717286       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:12.733829       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:12.733940       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:12.733954       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:12.733962       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:12.734767       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:12.734861       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:22.747461       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:22.747575       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:22.747589       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:22.747596       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:22.748388       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:22.748478       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:32.755048       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:32.755098       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:32.755111       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:32.755118       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:32.755281       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:32.755297       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:42.769640       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:42.769732       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:42.769748       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:42.769756       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:42.769900       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:42.769930       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:52.777787       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:52.777885       1 main.go:227] handling current node
	I0603 14:51:47.780012    9752 command_runner.go:130] ! I0603 14:41:52.777901       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:41:52.777909       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:41:52.778034       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:41:52.778047       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:02.796158       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:02.796336       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:02.796352       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:02.796361       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:02.796675       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:02.796693       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:12.804901       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:12.805658       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:12.805981       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:12.806077       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:12.808338       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:12.808446       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:22.822735       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:22.822779       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:22.822792       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:22.822798       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:22.823041       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:22.823056       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:32.829730       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:32.829780       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:32.829793       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:32.829798       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:32.830081       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:32.830157       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:42.843959       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:42.844251       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:42.844269       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:42.844278       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:42.844481       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:42.844489       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:52.970825       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:52.970941       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:52.970957       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:52.970965       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:52.971359       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:42:52.971390       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:02.985233       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:02.985707       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:02.985801       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:02.985813       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:02.986087       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:02.986213       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:13.001792       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:13.001903       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:13.001919       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:13.001926       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:13.002409       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:13.002546       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:23.014350       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:23.014430       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:23.014443       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:23.014466       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:23.014973       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:23.015050       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:33.028486       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:33.028618       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:33.028632       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:33.028639       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:33.028797       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:33.029137       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:43.042807       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:43.042971       1 main.go:227] handling current node
	I0603 14:51:47.781012    9752 command_runner.go:130] ! I0603 14:43:43.043055       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:43.043063       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:43.043998       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:43.044018       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:53.060985       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:53.061106       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:53.061142       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:53.061153       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:53.061441       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:43:53.061530       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:03.074882       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:03.075006       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:03.075023       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:03.075031       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:03.075251       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:03.075287       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:13.082515       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:13.082634       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:13.082649       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:13.082657       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:13.083854       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:13.084020       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:23.096516       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:23.096561       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:23.096574       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:23.096585       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:23.098310       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:23.098383       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:33.105034       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:33.105146       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:33.105199       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:33.105211       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:33.105354       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:33.105362       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:43.115437       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:43.115557       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:43.115572       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:43.115580       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:43.116248       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:43.116325       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:53.129841       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:53.129952       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:53.129967       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:53.129992       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:53.130474       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:44:53.130513       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:03.145387       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:03.145506       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:03.145522       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:03.145529       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:03.145991       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:03.146104       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:13.154208       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:13.154303       1 main.go:227] handling current node
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:13.154318       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:13.154325       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:13.154444       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.782021    9752 command_runner.go:130] ! I0603 14:45:13.154751       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:23.167023       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:23.167139       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:23.167156       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:23.167204       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:23.167490       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:23.167675       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:33.182518       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:33.182565       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:33.182579       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:33.182586       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:33.183095       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:33.183227       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:43.191204       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:43.191291       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:43.191307       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:43.191316       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:43.191713       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:43.191805       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:53.200715       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:53.200890       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:53.200927       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:53.200936       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:53.201688       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:45:53.201766       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:03.207719       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:03.207807       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:03.207821       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:03.207828       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.222386       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.222505       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.222522       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.222530       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.223020       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.223269       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:13.223648       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.22.151.134 Flags: [] Table: 0} 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:23.237715       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:23.237767       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:23.237797       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:23.237803       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:23.237989       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:23.238008       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:33.244795       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:33.244940       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:33.244960       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:33.244971       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:33.245647       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:33.245764       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:43.261658       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:43.262286       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:43.262368       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:43.262496       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:43.262847       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:43.262938       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:53.275414       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:53.275880       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:53.276199       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:53.276372       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:53.276690       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:46:53.276766       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:03.282970       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:03.283067       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:03.283157       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:03.283220       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:03.283747       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:03.283832       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:13.289208       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:13.289296       1 main.go:227] handling current node
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:13.289311       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:13.289321       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:13.290501       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:13.290610       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:23.305390       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.783021    9752 command_runner.go:130] ! I0603 14:47:23.305479       1 main.go:227] handling current node
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:23.305494       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:23.305501       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:23.306027       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:23.306196       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:33.320017       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:33.320267       1 main.go:227] handling current node
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:33.320364       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:33.320399       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:33.320800       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:33.320833       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:43.329989       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:43.330122       1 main.go:227] handling current node
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:43.330326       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:43.330486       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:43.331007       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:43.331092       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:53.346870       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:53.347021       1 main.go:227] handling current node
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:53.347035       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:53.347043       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:53.347400       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:47:53.347581       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:48:03.360705       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:48:03.360878       1 main.go:227] handling current node
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:48:03.360896       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:48:03.360904       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:48:03.361256       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:47.784009    9752 command_runner.go:130] ! I0603 14:48:03.361334       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:47.801018    9752 logs.go:123] Gathering logs for container status ...
	I0603 14:51:47.801018    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 14:51:47.861010    9752 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0603 14:51:47.861010    9752 command_runner.go:130] > f9b260d61dfbd       cbb01a7bd410d                                                                                         3 seconds ago        Running             coredns                   1                   1bc1567075734       coredns-7db6d8ff4d-c9wpc
	I0603 14:51:47.861010    9752 command_runner.go:130] > 291b656660b4b       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   526c48b9021d6       busybox-fc5497c4f-n2t5d
	I0603 14:51:47.861010    9752 command_runner.go:130] > c81abdbb29c7c       6e38f40d628db                                                                                         22 seconds ago       Running             storage-provisioner       2                   b4a4ad712a66e       storage-provisioner
	I0603 14:51:47.861010    9752 command_runner.go:130] > 008dec75d90c7       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a3698c141b116       kindnet-26s27
	I0603 14:51:47.861010    9752 command_runner.go:130] > 2061be0913b2b       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b4a4ad712a66e       storage-provisioner
	I0603 14:51:47.861010    9752 command_runner.go:130] > 42926c33070ce       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   2ae2b089ecf3b       kube-proxy-64l9x
	I0603 14:51:47.861010    9752 command_runner.go:130] > 885576ffcadd7       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   192b150e443d2       kube-apiserver-multinode-720500
	I0603 14:51:47.861010    9752 command_runner.go:130] > 480ef64cfa226       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   3e60bc15f541e       etcd-multinode-720500
	I0603 14:51:47.862025    9752 command_runner.go:130] > f14b3b67d8f28       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   29feb700b8ebf       kube-controller-manager-multinode-720500
	I0603 14:51:47.862025    9752 command_runner.go:130] > e2d000674d525       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   0461b752e7281       kube-scheduler-multinode-720500
	I0603 14:51:47.862025    9752 command_runner.go:130] > a76f9e773a2f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   e2a9c5dc3b1b0       busybox-fc5497c4f-n2t5d
	I0603 14:51:47.862025    9752 command_runner.go:130] > 68e49c3e6ddaa       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   1ac710138e878       coredns-7db6d8ff4d-c9wpc
	I0603 14:51:47.862025    9752 command_runner.go:130] > ab840a6a9856d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   91df341636e89       kindnet-26s27
	I0603 14:51:47.862025    9752 command_runner.go:130] > 3823f2e2bdb28       747097150317f                                                                                         24 minutes ago       Exited              kube-proxy                0                   45c98b77811e1       kube-proxy-64l9x
	I0603 14:51:47.862025    9752 command_runner.go:130] > 63a6ebee2e836       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   19b3080db261a       kube-controller-manager-multinode-720500
	I0603 14:51:47.862025    9752 command_runner.go:130] > ec3860b2bb3ef       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   73f8312902b01       kube-scheduler-multinode-720500
	I0603 14:51:47.864009    9752 logs.go:123] Gathering logs for kubelet ...
	I0603 14:51:47.864009    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 14:51:47.892099    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.461169    1389 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.461675    1389 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.463263    1389 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: E0603 14:50:30.464581    1389 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.183733    1442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.183842    1442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.187119    1442 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: E0603 14:50:31.187481    1442 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.822960    1525 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.823030    1525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.823310    1525 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.825110    1525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.838917    1525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.864578    1525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.864681    1525 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.865871    1525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.865955    1525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-720500","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.867023    1525 topology_manager.go:138] "Creating topology manager with none policy"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.867065    1525 container_manager_linux.go:301] "Creating device plugin manager"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.868032    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872473    1525 kubelet.go:400] "Attempting to sync node with API server"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872570    1525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872603    1525 kubelet.go:312] "Adding apiserver pod source"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.874552    1525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.878535    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.878646    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.881181    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.881366    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.883254    1525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.884826    1525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.885850    1525 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.886975    1525 server.go:1264] "Started kubelet"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.895136    1525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.899089    1525 server.go:455] "Adding debug handlers to kubelet server"
	I0603 14:51:47.893093    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.899110    1525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.901004    1525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.902811    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.22.154.20:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-720500.17d5860f76c4d283  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-720500,UID:multinode-720500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-720500,},FirstTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,LastTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-720500,}"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.905416    1525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.915751    1525 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.921759    1525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.948843    1525 reconciler.go:26] "Reconciler: start to sync state"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.955483    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="200ms"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.955934    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.956139    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956405    1525 factory.go:221] Registration of the systemd container factory successfully
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956512    1525 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956608    1525 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956737    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.958873    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.958985    1525 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.959014    1525 kubelet.go:2337] "Starting kubelet main sync loop"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.959250    1525 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.983497    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.993696    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.993829    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023526    1525 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023565    1525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023586    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024426    1525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024488    1525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024529    1525 policy_none.go:49] "None policy: Start"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.028955    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.030495    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.035699    1525 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.035745    1525 state_mem.go:35] "Initializing new in-memory state store"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.036656    1525 state_mem.go:75] "Updated machine memory state"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.041946    1525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.042384    1525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.043501    1525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0603 14:51:47.894095    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.049031    1525 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-720500\" not found"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.060498    1525 topology_manager.go:215] "Topology Admit Handler" podUID="f58e384885de6f2352fb028e836ba47f" podNamespace="kube-system" podName="kube-scheduler-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.061562    1525 topology_manager.go:215] "Topology Admit Handler" podUID="a9aa17bec6c8b90196f8771e2e5c6391" podNamespace="kube-system" podName="kube-apiserver-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.062289    1525 topology_manager.go:215] "Topology Admit Handler" podUID="78d1bd07ad8cdd8611c0b5d7e797ef30" podNamespace="kube-system" podName="kube-controller-manager-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.063858    1525 topology_manager.go:215] "Topology Admit Handler" podUID="7a9c45e53018cd74c5a13ccfd96f1479" podNamespace="kube-system" podName="etcd-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.065312    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b548c7f105007ea217eb3af0981a11ac9ecbfca503b21d85486e0b994bd5ea"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.075734    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.101720    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf3e16838818729d3b0679cd21964fdf47441ebf169a121ac598081429082e9d"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.120274    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91df341636e892cd93c25fa7ad7384bcf2bd819376c32058f4ee8317633ccdb9"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.136641    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73f8312902b01b75c8ea80234be416d3ffc9a1089252bd3c6d01a2cd098215be"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.156601    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.157623    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="400ms"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.173261    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b3080db261aed80f74241b549711c9e0e8bf8d76726121d9447965ca7e2087"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188271    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-kubeconfig\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188310    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-ca-certs\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188378    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-k8s-certs\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188400    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188427    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7a9c45e53018cd74c5a13ccfd96f1479-etcd-certs\") pod \"etcd-multinode-720500\" (UID: \"7a9c45e53018cd74c5a13ccfd96f1479\") " pod="kube-system/etcd-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188469    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7a9c45e53018cd74c5a13ccfd96f1479-etcd-data\") pod \"etcd-multinode-720500\" (UID: \"7a9c45e53018cd74c5a13ccfd96f1479\") " pod="kube-system/etcd-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188506    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f58e384885de6f2352fb028e836ba47f-kubeconfig\") pod \"kube-scheduler-multinode-720500\" (UID: \"f58e384885de6f2352fb028e836ba47f\") " pod="kube-system/kube-scheduler-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188525    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-ca-certs\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188569    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-k8s-certs\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188590    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-flexvolume-dir\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188614    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.189831    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45c98b77811e1a1610a97d2f641597b26b618ffe831fe5ad3ec241b34af76a6b"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.211600    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dbe33ccede837b8bf9917f1f085422d402ca29fcadcc3715a72edb8570a28f0"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.232599    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.233792    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.559275    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="800ms"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.635611    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:47.895206    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.636574    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: W0603 14:50:34.930484    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.930562    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.013602    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.013737    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.058377    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.058502    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.276396    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.276674    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.361658    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="1.6s"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: I0603 14:50:35.437822    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.439455    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.759532    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.22.154.20:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-720500.17d5860f76c4d283  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-720500,UID:multinode-720500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-720500,},FirstTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,LastTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-720500,}"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:37 multinode-720500 kubelet[1525]: I0603 14:50:37.041688    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.524109    1525 kubelet_node_status.go:112] "Node was previously registered" node="multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.524300    1525 kubelet_node_status.go:76] "Successfully registered node" node="multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.525714    1525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.527071    1525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.528427    1525 setters.go:580] "Node became not ready" node="multinode-720500" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-03T14:50:39Z","lastTransitionTime":"2024-06-03T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.569920    1525 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-720500\" already exists" pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.884500    1525 apiserver.go:52] "Watching apiserver"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.889699    1525 topology_manager.go:215] "Topology Admit Handler" podUID="ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a" podNamespace="kube-system" podName="kube-proxy-64l9x"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.889893    1525 topology_manager.go:215] "Topology Admit Handler" podUID="08ea7c30-4962-4026-8eb0-6864835e97e6" podNamespace="kube-system" podName="kindnet-26s27"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890015    1525 topology_manager.go:215] "Topology Admit Handler" podUID="5d120704-a803-4278-aa7c-32304a6164a3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c9wpc"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890251    1525 topology_manager.go:215] "Topology Admit Handler" podUID="8380cfdf-9758-4fd8-a511-db50974806a2" podNamespace="kube-system" podName="storage-provisioner"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890408    1525 topology_manager.go:215] "Topology Admit Handler" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef" podNamespace="default" podName="busybox-fc5497c4f-n2t5d"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890532    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-720500" podUID="a99295b9-ba4f-4b3f-9bc7-3e6e09de9b09"
	I0603 14:51:47.896115    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.890739    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.891991    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.919591    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-720500"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.922418    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947805    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a-lib-modules\") pod \"kube-proxy-64l9x\" (UID: \"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a\") " pod="kube-system/kube-proxy-64l9x"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947924    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-cni-cfg\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947970    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-xtables-lock\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947990    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8380cfdf-9758-4fd8-a511-db50974806a2-tmp\") pod \"storage-provisioner\" (UID: \"8380cfdf-9758-4fd8-a511-db50974806a2\") " pod="kube-system/storage-provisioner"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.948046    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a-xtables-lock\") pod \"kube-proxy-64l9x\" (UID: \"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a\") " pod="kube-system/kube-proxy-64l9x"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.948118    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-lib-modules\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.949354    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.949442    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:40.449414293 +0000 UTC m=+6.735278838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.967616    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc25f3659bb9b137f23bf9424dba20e" path="/var/lib/kubelet/pods/2dc25f3659bb9b137f23bf9424dba20e/volumes"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.969042    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36433239452f37b4b0410f69c12da408" path="/var/lib/kubelet/pods/36433239452f37b4b0410f69c12da408/volumes"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984720    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984802    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984886    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:40.484862826 +0000 UTC m=+6.770727471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: I0603 14:50:40.019663    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-720500" podStartSLOduration=1.019649758 podStartE2EDuration="1.019649758s" podCreationTimestamp="2024-06-03 14:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:50:40.018824057 +0000 UTC m=+6.304688702" watchObservedRunningTime="2024-06-03 14:50:40.019649758 +0000 UTC m=+6.305514303"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.455710    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.455796    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:41.455777259 +0000 UTC m=+7.741641804 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556713    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556760    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556889    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:41.556863952 +0000 UTC m=+7.842728597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: I0603 14:50:40.845891    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae2b089ecf3ba840b08192449967b2406f6c6d0d8a56a114ddaabc35e3c7ee5"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.271560    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3698c141b11639f71ba16cbcb832e7c02097b07aaf307ba72c7cf41a64d9dde"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.438384    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a4ad712a66e8ac5a3ba6d988006318e7c0932c2ad0e4ce9838e7a98695f555"
	I0603 14:51:47.897102    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.438646    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-720500" podUID="aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.465430    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.465640    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:43.465616988 +0000 UTC m=+9.751481633 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.502271    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566766    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566801    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566917    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:43.566874981 +0000 UTC m=+9.852739626 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.961788    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.961975    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:42 multinode-720500 kubelet[1525]: I0603 14:50:42.520599    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-720500" podUID="aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.487623    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.487724    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:47.487705549 +0000 UTC m=+13.773570194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588583    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588739    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588832    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:47.588814442 +0000 UTC m=+13.874678987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.961044    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.961649    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:44 multinode-720500 kubelet[1525]: E0603 14:50:44.044586    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:45 multinode-720500 kubelet[1525]: E0603 14:50:45.961659    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:45 multinode-720500 kubelet[1525]: E0603 14:50:45.961954    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.521989    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.522196    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:55.522177172 +0000 UTC m=+21.808041717 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.622845    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.623053    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.623208    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:55.623162574 +0000 UTC m=+21.909027119 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.962070    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.898094    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.962858    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.046385    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.959451    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.960279    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:51 multinode-720500 kubelet[1525]: E0603 14:50:51.960531    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:51 multinode-720500 kubelet[1525]: E0603 14:50:51.961799    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:52 multinode-720500 kubelet[1525]: I0603 14:50:52.534860    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-720500" podStartSLOduration=5.534842522 podStartE2EDuration="5.534842522s" podCreationTimestamp="2024-06-03 14:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:50:52.533300056 +0000 UTC m=+18.819164701" watchObservedRunningTime="2024-06-03 14:50:52.534842522 +0000 UTC m=+18.820707067"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:53 multinode-720500 kubelet[1525]: E0603 14:50:53.960555    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:53 multinode-720500 kubelet[1525]: E0603 14:50:53.961087    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:54 multinode-720500 kubelet[1525]: E0603 14:50:54.048175    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.600709    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.600890    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:51:11.600870216 +0000 UTC m=+37.886734761 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701124    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701172    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701306    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:51:11.701288915 +0000 UTC m=+37.987153560 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.959849    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.960175    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:57 multinode-720500 kubelet[1525]: E0603 14:50:57.960559    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:57 multinode-720500 kubelet[1525]: E0603 14:50:57.961245    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.050189    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.962718    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.963597    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:51:01 multinode-720500 kubelet[1525]: E0603 14:51:01.959962    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:51:01 multinode-720500 kubelet[1525]: E0603 14:51:01.961107    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:51:03 multinode-720500 kubelet[1525]: E0603 14:51:03.960485    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:51:03 multinode-720500 kubelet[1525]: E0603 14:51:03.961168    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:51:04 multinode-720500 kubelet[1525]: E0603 14:51:04.052718    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.899096    9752 command_runner.go:130] > Jun 03 14:51:05 multinode-720500 kubelet[1525]: E0603 14:51:05.960258    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:05 multinode-720500 kubelet[1525]: E0603 14:51:05.960918    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:07 multinode-720500 kubelet[1525]: E0603 14:51:07.960257    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:07 multinode-720500 kubelet[1525]: E0603 14:51:07.961704    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.054870    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.962422    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.963393    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.663780    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.664114    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:51:43.66409273 +0000 UTC m=+69.949957275 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.764900    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.764958    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.765022    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:51:43.765005046 +0000 UTC m=+70.050869691 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.962142    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.962815    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: I0603 14:51:12.896193    1525 scope.go:117] "RemoveContainer" containerID="097ab9a9a33bbee7997d827b04c2900ded8d532f232d924bb9d84ecc302ec8b8"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: I0603 14:51:12.896857    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: E0603 14:51:12.897037    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8380cfdf-9758-4fd8-a511-db50974806a2)\"" pod="kube-system/storage-provisioner" podUID="8380cfdf-9758-4fd8-a511-db50974806a2"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:13 multinode-720500 kubelet[1525]: E0603 14:51:13.960835    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:13 multinode-720500 kubelet[1525]: E0603 14:51:13.961713    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:14 multinode-720500 kubelet[1525]: E0603 14:51:14.056993    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:15 multinode-720500 kubelet[1525]: E0603 14:51:15.959976    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:15 multinode-720500 kubelet[1525]: E0603 14:51:15.961758    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:17 multinode-720500 kubelet[1525]: E0603 14:51:17.963254    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:17 multinode-720500 kubelet[1525]: E0603 14:51:17.963475    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:25 multinode-720500 kubelet[1525]: I0603 14:51:25.959992    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]: E0603 14:51:33.993879    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 14:51:47.900095    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 14:51:47.901095    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 14:51:47.901095    9752 command_runner.go:130] > Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.037024    1525 scope.go:117] "RemoveContainer" containerID="dcd798ff8a4661302e83f6f11f14422de529b0502fcd6143a4a29a3f45757a8a"
	I0603 14:51:47.901095    9752 command_runner.go:130] > Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.091663    1525 scope.go:117] "RemoveContainer" containerID="5185046feae6a986658119ffc29d3a23423e83dba5ada983e73072c57ee6ad2d"
	I0603 14:51:47.901095    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.627773    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891"
	I0603 14:51:47.901095    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.667520    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7"
	I0603 14:51:47.943732    9752 logs.go:123] Gathering logs for coredns [f9b260d61dfb] ...
	I0603 14:51:47.943732    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b260d61dfb"
	I0603 14:51:47.980438    9752 command_runner.go:130] > .:53
	I0603 14:51:47.980438    9752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	I0603 14:51:47.980438    9752 command_runner.go:130] > CoreDNS-1.11.1
	I0603 14:51:47.980438    9752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 14:51:47.980438    9752 command_runner.go:130] > [INFO] 127.0.0.1:44244 - 27530 "HINFO IN 6157212600695805867.8146164028617998750. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029059168s
	I0603 14:51:47.981455    9752 logs.go:123] Gathering logs for kube-proxy [42926c33070c] ...
	I0603 14:51:47.981455    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42926c33070c"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.069219       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.114052       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.154.20"]
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.256500       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.256559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.256598       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.262735       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.263687       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.263771       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.271889       1 config.go:192] "Starting service config controller"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.273191       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.273658       1 config.go:319] "Starting node config controller"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.273675       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.275244       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.279063       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.373930       1 shared_informer.go:320] Caches are synced for node config
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.373994       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:51:48.005404    9752 command_runner.go:130] ! I0603 14:50:42.379201       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:51:48.007829    9752 logs.go:123] Gathering logs for kube-controller-manager [63a6ebee2e83] ...
	I0603 14:51:48.007829    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a6ebee2e83"
	I0603 14:51:48.041752    9752 command_runner.go:130] ! I0603 14:27:13.353282       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:48.041752    9752 command_runner.go:130] ! I0603 14:27:13.803232       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 14:51:48.041865    9752 command_runner.go:130] ! I0603 14:27:13.803270       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:48.041865    9752 command_runner.go:130] ! I0603 14:27:13.805599       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 14:51:48.041865    9752 command_runner.go:130] ! I0603 14:27:13.806647       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:48.041865    9752 command_runner.go:130] ! I0603 14:27:13.806911       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:48.041943    9752 command_runner.go:130] ! I0603 14:27:13.807149       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:48.042041    9752 command_runner.go:130] ! I0603 14:27:18.070475       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 14:51:48.042071    9752 command_runner.go:130] ! I0603 14:27:18.071643       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 14:51:48.042071    9752 command_runner.go:130] ! I0603 14:27:18.088516       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 14:51:48.042071    9752 command_runner.go:130] ! I0603 14:27:18.089260       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 14:51:48.042605    9752 command_runner.go:130] ! I0603 14:27:18.091678       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 14:51:48.042605    9752 command_runner.go:130] ! I0603 14:27:18.106231       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 14:51:48.042605    9752 command_runner.go:130] ! I0603 14:27:18.107081       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 14:51:48.042747    9752 command_runner.go:130] ! I0603 14:27:18.108455       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:48.042774    9752 command_runner.go:130] ! I0603 14:27:18.109348       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 14:51:48.042774    9752 command_runner.go:130] ! I0603 14:27:18.151033       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 14:51:48.042774    9752 command_runner.go:130] ! I0603 14:27:18.151678       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 14:51:48.042835    9752 command_runner.go:130] ! I0603 14:27:18.154062       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 14:51:48.042857    9752 command_runner.go:130] ! I0603 14:27:18.171773       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 14:51:48.042857    9752 command_runner.go:130] ! I0603 14:27:18.172224       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 14:51:48.042902    9752 command_runner.go:130] ! I0603 14:27:18.174296       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 14:51:48.042902    9752 command_runner.go:130] ! I0603 14:27:18.174338       1 shared_informer.go:320] Caches are synced for tokens
	I0603 14:51:48.042924    9752 command_runner.go:130] ! I0603 14:27:18.177788       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 14:51:48.042924    9752 command_runner.go:130] ! I0603 14:27:18.178320       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 14:51:48.042990    9752 command_runner.go:130] ! I0603 14:27:28.218964       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 14:51:48.042990    9752 command_runner.go:130] ! I0603 14:27:28.219108       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 14:51:48.042990    9752 command_runner.go:130] ! I0603 14:27:28.219379       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 14:51:48.042990    9752 command_runner.go:130] ! I0603 14:27:28.219457       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 14:51:48.043074    9752 command_runner.go:130] ! I0603 14:27:28.240397       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 14:51:48.043074    9752 command_runner.go:130] ! I0603 14:27:28.240536       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 14:51:48.043074    9752 command_runner.go:130] ! I0603 14:27:28.241865       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 14:51:48.043127    9752 command_runner.go:130] ! I0603 14:27:28.252890       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 14:51:48.043159    9752 command_runner.go:130] ! I0603 14:27:28.252986       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 14:51:48.043159    9752 command_runner.go:130] ! I0603 14:27:28.253020       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 14:51:48.043159    9752 command_runner.go:130] ! I0603 14:27:28.253969       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.254003       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.267837       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.268144       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.268510       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.280487       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.280963       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.281100       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.330303       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.330841       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 14:51:48.043224    9752 command_runner.go:130] ! E0603 14:27:28.344040       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.344231       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.359644       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.360056       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.360090       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.377777       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.378044       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.378071       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.393317       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.393857       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.394059       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.410446       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.411081       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.412101       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.512629       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.513125       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.664349       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.664428       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.664441       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.664449       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.708054       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.708215       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.708231       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 14:51:48.043224    9752 command_runner.go:130] ! I0603 14:27:28.708444       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:28.708473       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:28.708481       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:28.864634       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:28.864803       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:28.865680       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:29.059529       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 14:51:48.043760    9752 command_runner.go:130] ! I0603 14:27:29.059649       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 14:51:48.043908    9752 command_runner.go:130] ! I0603 14:27:29.059722       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 14:51:48.043908    9752 command_runner.go:130] ! I0603 14:27:29.059857       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 14:51:48.043908    9752 command_runner.go:130] ! I0603 14:27:29.216054       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 14:51:48.043908    9752 command_runner.go:130] ! I0603 14:27:29.216706       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 14:51:48.043974    9752 command_runner.go:130] ! I0603 14:27:29.217129       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 14:51:48.043989    9752 command_runner.go:130] ! I0603 14:27:29.364837       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 14:51:48.043989    9752 command_runner.go:130] ! I0603 14:27:29.364997       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 14:51:48.043989    9752 command_runner.go:130] ! I0603 14:27:29.365010       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 14:51:48.043989    9752 command_runner.go:130] ! I0603 14:27:29.412763       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 14:51:48.044044    9752 command_runner.go:130] ! I0603 14:27:29.412820       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 14:51:48.044066    9752 command_runner.go:130] ! I0603 14:27:29.412852       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.412870       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.566965       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.567223       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.568152       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.820140       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.821302       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.821913       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.821950       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.821977       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! E0603 14:27:29.857788       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:29.858966       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.016833       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.016997       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.017402       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.171847       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.172459       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.171899       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.172588       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.313964       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.316900       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.318749       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.359770       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.359992       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.360405       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.361780       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.362782       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.362463       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.363332       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.362554       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.363636       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.362564       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.362302       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 14:51:48.044092    9752 command_runner.go:130] ! I0603 14:27:30.362526       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.045755    9752 command_runner.go:130] ! I0603 14:27:30.362586       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.513474       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.513598       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.513645       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.663349       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.663937       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.664013       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.965387       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 14:51:48.045888    9752 command_runner.go:130] ! I0603 14:27:30.965553       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 14:51:48.046079    9752 command_runner.go:130] ! I0603 14:27:30.965614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 14:51:48.046079    9752 command_runner.go:130] ! I0603 14:27:30.965669       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 14:51:48.046079    9752 command_runner.go:130] ! I0603 14:27:30.965730       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 14:51:48.046079    9752 command_runner.go:130] ! W0603 14:27:30.965760       1 shared_informer.go:597] resyncPeriod 16h47m43.189313611s is smaller than resyncCheckPeriod 20h18m50.945071724s and the informer has already started. Changing it to 20h18m50.945071724s
	I0603 14:51:48.046079    9752 command_runner.go:130] ! I0603 14:27:30.965868       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.966063       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.966153       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.966351       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! W0603 14:27:30.966376       1 shared_informer.go:597] resyncPeriod 20h4m14.719740563s is smaller than resyncCheckPeriod 20h18m50.945071724s and the informer has already started. Changing it to 20h18m50.945071724s
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.966444       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.966547       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.966953       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.967035       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.967206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 14:51:48.046264    9752 command_runner.go:130] ! I0603 14:27:30.967556       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 14:51:48.046476    9752 command_runner.go:130] ! I0603 14:27:30.967765       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 14:51:48.046476    9752 command_runner.go:130] ! I0603 14:27:30.967951       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 14:51:48.046551    9752 command_runner.go:130] ! I0603 14:27:30.968043       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 14:51:48.046551    9752 command_runner.go:130] ! I0603 14:27:30.968127       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 14:51:48.046551    9752 command_runner.go:130] ! I0603 14:27:30.968266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 14:51:48.046627    9752 command_runner.go:130] ! I0603 14:27:30.968373       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 14:51:48.046627    9752 command_runner.go:130] ! I0603 14:27:30.969236       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 14:51:48.046627    9752 command_runner.go:130] ! I0603 14:27:30.969448       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:48.046627    9752 command_runner.go:130] ! I0603 14:27:30.969971       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 14:51:48.046627    9752 command_runner.go:130] ! I0603 14:27:31.113941       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.114128       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.114206       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.263385       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.263850       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.263883       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.412784       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.412929       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.412960       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.563645       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.563784       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.563863       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.716550       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.717040       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.717246       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.727461       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.754004       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500\" does not exist"
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.754224       1 shared_informer.go:320] Caches are synced for GC
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.754460       1 shared_informer.go:320] Caches are synced for HPA
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.760470       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.761503       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.763249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 14:51:48.046749    9752 command_runner.go:130] ! I0603 14:27:31.763617       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.764580       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.765622       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.765811       1 shared_informer.go:320] Caches are synced for TTL
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.765139       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.765067       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.768636       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.770136       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 14:51:48.047258    9752 command_runner.go:130] ! I0603 14:27:31.772665       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 14:51:48.047440    9752 command_runner.go:130] ! I0603 14:27:31.775271       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 14:51:48.047440    9752 command_runner.go:130] ! I0603 14:27:31.782285       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 14:51:48.047440    9752 command_runner.go:130] ! I0603 14:27:31.792874       1 shared_informer.go:320] Caches are synced for service account
	I0603 14:51:48.047440    9752 command_runner.go:130] ! I0603 14:27:31.795205       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 14:51:48.047509    9752 command_runner.go:130] ! I0603 14:27:31.809247       1 shared_informer.go:320] Caches are synced for taint
	I0603 14:51:48.047509    9752 command_runner.go:130] ! I0603 14:27:31.809495       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 14:51:48.047611    9752 command_runner.go:130] ! I0603 14:27:31.810723       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500"
	I0603 14:51:48.047611    9752 command_runner.go:130] ! I0603 14:27:31.812015       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:48.047611    9752 command_runner.go:130] ! I0603 14:27:31.812917       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 14:51:48.047611    9752 command_runner.go:130] ! I0603 14:27:31.812992       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:51:48.047686    9752 command_runner.go:130] ! I0603 14:27:31.815953       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 14:51:48.047704    9752 command_runner.go:130] ! I0603 14:27:31.816065       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 14:51:48.047704    9752 command_runner.go:130] ! I0603 14:27:31.816884       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 14:51:48.047704    9752 command_runner.go:130] ! I0603 14:27:31.817703       1 shared_informer.go:320] Caches are synced for expand
	I0603 14:51:48.047771    9752 command_runner.go:130] ! I0603 14:27:31.817728       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:51:48.047771    9752 command_runner.go:130] ! I0603 14:27:31.819607       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 14:51:48.047771    9752 command_runner.go:130] ! I0603 14:27:31.820072       1 shared_informer.go:320] Caches are synced for node
	I0603 14:51:48.047771    9752 command_runner.go:130] ! I0603 14:27:31.820270       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 14:51:48.047771    9752 command_runner.go:130] ! I0603 14:27:31.820477       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 14:51:48.047850    9752 command_runner.go:130] ! I0603 14:27:31.820555       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 14:51:48.047850    9752 command_runner.go:130] ! I0603 14:27:31.820587       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 14:51:48.047850    9752 command_runner.go:130] ! I0603 14:27:31.820081       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 14:51:48.047850    9752 command_runner.go:130] ! I0603 14:27:31.825727       1 shared_informer.go:320] Caches are synced for namespace
	I0603 14:51:48.047910    9752 command_runner.go:130] ! I0603 14:27:31.832846       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 14:51:48.047910    9752 command_runner.go:130] ! I0603 14:27:31.842133       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:51:48.047910    9752 command_runner.go:130] ! I0603 14:27:31.855357       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500" podCIDRs=["10.244.0.0/24"]
	I0603 14:51:48.048016    9752 command_runner.go:130] ! I0603 14:27:31.878271       1 shared_informer.go:320] Caches are synced for job
	I0603 14:51:48.048040    9752 command_runner.go:130] ! I0603 14:27:31.913558       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:51:48.048040    9752 command_runner.go:130] ! I0603 14:27:31.965153       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.028352       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.061268       1 shared_informer.go:320] Caches are synced for disruption
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.065241       1 shared_informer.go:320] Caches are synced for deployment
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.069863       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.469591       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.510278       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:32.510533       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:33.110436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="199.281878ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:33.230475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="119.89616ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:33.230569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:34.176449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.004127ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:34.199426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.643683ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:34.201037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.6µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:43.109227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="168.101µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:43.154756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="203.6µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:44.622262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.3µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:45.655101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.946906ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:45.656447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.098µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:27:46.817078       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:30:30.530460       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:30:30.563054       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m02" podCIDRs=["10.244.1.0/24"]
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:30:31.846889       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:30:49.741096       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:31:16.611365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.145667ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:31:16.634251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.843998ms"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:31:16.634722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="196.103µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:31:16.635057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.4µs"
	I0603 14:51:48.048067    9752 command_runner.go:130] ! I0603 14:31:16.670503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.001µs"
	I0603 14:51:48.048609    9752 command_runner.go:130] ! I0603 14:31:19.698737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.129108ms"
	I0603 14:51:48.048609    9752 command_runner.go:130] ! I0603 14:31:19.698833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.8µs"
	I0603 14:51:48.048609    9752 command_runner.go:130] ! I0603 14:31:20.055879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.87041ms"
	I0603 14:51:48.048609    9752 command_runner.go:130] ! I0603 14:31:20.057158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.2µs"
	I0603 14:51:48.048609    9752 command_runner.go:130] ! I0603 14:35:14.351135       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.048795    9752 command_runner.go:130] ! I0603 14:35:14.351827       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:48.048869    9752 command_runner.go:130] ! I0603 14:35:14.376803       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.2.0/24"]
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:35:16.927010       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:35:33.157459       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:43:17.065455       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:45:58.451014       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:46:04.988996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:46:04.989982       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:46:05.046032       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.3.0/24"]
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:46:11.957254       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.049446    9752 command_runner.go:130] ! I0603 14:47:47.196592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:48.069847    9752 logs.go:123] Gathering logs for describe nodes ...
	I0603 14:51:48.069847    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 14:51:48.317783    9752 command_runner.go:130] > Name:               multinode-720500
	I0603 14:51:48.317849    9752 command_runner.go:130] > Roles:              control-plane
	I0603 14:51:48.317849    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:48.317912    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:48.317912    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:48.317912    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500
	I0603 14:51:48.317912    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:48.317974    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:48.317974    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:48.317974    9752 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0603 14:51:48.318084    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_27_19_0700
	I0603 14:51:48.318105    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:48.318105    9752 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0603 14:51:48.318127    9752 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0603 14:51:48.318158    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:48.318158    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:48.318158    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:48.318158    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:27:15 +0000
	I0603 14:51:48.318158    9752 command_runner.go:130] > Taints:             <none>
	I0603 14:51:48.318158    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:48.318158    9752 command_runner.go:130] > Lease:
	I0603 14:51:48.318158    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500
	I0603 14:51:48.318158    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:48.318158    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:51:40 +0000
	I0603 14:51:48.318158    9752 command_runner.go:130] > Conditions:
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0603 14:51:48.318158    9752 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0603 14:51:48.318158    9752 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0603 14:51:48.318158    9752 command_runner.go:130] >   DiskPressure     False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0603 14:51:48.318158    9752 command_runner.go:130] >   PIDPressure      False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Ready            True    Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:51:20 +0000   KubeletReady                 kubelet is posting ready status
	I0603 14:51:48.318158    9752 command_runner.go:130] > Addresses:
	I0603 14:51:48.318158    9752 command_runner.go:130] >   InternalIP:  172.22.154.20
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Hostname:    multinode-720500
	I0603 14:51:48.318158    9752 command_runner.go:130] > Capacity:
	I0603 14:51:48.318158    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:48.318158    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:48.318158    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:48.318158    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:48.318158    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:48.318158    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:48.318158    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:48.318158    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:48.318158    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:48.318158    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:48.318158    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:48.318158    9752 command_runner.go:130] > System Info:
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Machine ID:                 d1c31924319744c587cc3327e70686c4
	I0603 14:51:48.318158    9752 command_runner.go:130] >   System UUID:                ea941aa7-cd12-1640-be08-34f8de2baf60
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Boot ID:                    81a28d6f-5e2f-4dbf-9879-01594b427fd6
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:48.318158    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:48.318158    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:48.318702    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:48.318702    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:48.318702    9752 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0603 14:51:48.318762    9752 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0603 14:51:48.318762    9752 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0603 14:51:48.318762    9752 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:48.318762    9752 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0603 14:51:48.318762    9752 command_runner.go:130] >   default                     busybox-fc5497c4f-n2t5d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 14:51:48.318857    9752 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-c9wpc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0603 14:51:48.318857    9752 command_runner.go:130] >   kube-system                 etcd-multinode-720500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         69s
	I0603 14:51:48.318857    9752 command_runner.go:130] >   kube-system                 kindnet-26s27                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0603 14:51:48.318922    9752 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-720500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	I0603 14:51:48.318945    9752 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-720500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:48.318974    9752 command_runner.go:130] >   kube-system                 kube-proxy-64l9x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:48.318974    9752 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-720500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:48.318974    9752 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:48.318974    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:48.318974    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Resource           Requests     Limits
	I0603 14:51:48.318974    9752 command_runner.go:130] >   --------           --------     ------
	I0603 14:51:48.318974    9752 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0603 14:51:48.318974    9752 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0603 14:51:48.318974    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0603 14:51:48.318974    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0603 14:51:48.318974    9752 command_runner.go:130] > Events:
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 14:51:48.318974    9752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-720500 status is now: NodeReady
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:48.318974    9752 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	I0603 14:51:48.319527    9752 command_runner.go:130] > Name:               multinode-720500-m02
	I0603 14:51:48.319527    9752 command_runner.go:130] > Roles:              <none>
	I0603 14:51:48.319527    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500-m02
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:48.319571    9752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 14:51:48.319685    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_30_31_0700
	I0603 14:51:48.319685    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:48.319788    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:48.319811    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:48.319839    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:48.319839    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:30:30 +0000
	I0603 14:51:48.319839    9752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 14:51:48.319839    9752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 14:51:48.319839    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:48.319839    9752 command_runner.go:130] > Lease:
	I0603 14:51:48.319839    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500-m02
	I0603 14:51:48.319839    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:48.319839    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:47:23 +0000
	I0603 14:51:48.319839    9752 command_runner.go:130] > Conditions:
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 14:51:48.319839    9752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 14:51:48.319839    9752 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.319839    9752 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.319839    9752 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.319839    9752 command_runner.go:130] > Addresses:
	I0603 14:51:48.319839    9752 command_runner.go:130] >   InternalIP:  172.22.146.196
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Hostname:    multinode-720500-m02
	I0603 14:51:48.319839    9752 command_runner.go:130] > Capacity:
	I0603 14:51:48.319839    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:48.319839    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:48.319839    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:48.319839    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:48.319839    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:48.319839    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:48.319839    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:48.319839    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:48.319839    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:48.319839    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:48.319839    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:48.319839    9752 command_runner.go:130] > System Info:
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Machine ID:                 235e819893284fd6a235e0cb3c7475f0
	I0603 14:51:48.319839    9752 command_runner.go:130] >   System UUID:                e57aaa06-73e1-b24d-bfac-b1ae5e512ff1
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Boot ID:                    fe92bdd5-fbf4-4f1a-9684-a535d77de9c7
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:48.319839    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:48.319839    9752 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0603 14:51:48.319839    9752 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0603 14:51:48.319839    9752 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0603 14:51:48.319839    9752 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:48.319839    9752 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0603 14:51:48.319839    9752 command_runner.go:130] >   default                     busybox-fc5497c4f-mjhcf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 14:51:48.320371    9752 command_runner.go:130] >   kube-system                 kindnet-fmfz2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0603 14:51:48.320429    9752 command_runner.go:130] >   kube-system                 kube-proxy-sm9rr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0603 14:51:48.320429    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:48.320429    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:48.320429    9752 command_runner.go:130] >   Resource           Requests   Limits
	I0603 14:51:48.320429    9752 command_runner.go:130] >   --------           --------   ------
	I0603 14:51:48.320429    9752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 14:51:48.320429    9752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 14:51:48.320429    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 14:51:48.320429    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 14:51:48.320548    9752 command_runner.go:130] > Events:
	I0603 14:51:48.320548    9752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 14:51:48.320548    9752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 14:51:48.320548    9752 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0603 14:51:48.320616    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientMemory
	I0603 14:51:48.320641    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasNoDiskPressure
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientPID
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-720500-m02 status is now: NodeReady
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Normal  NodeNotReady             3m41s              node-controller  Node multinode-720500-m02 status is now: NodeNotReady
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	I0603 14:51:48.320671    9752 command_runner.go:130] > Name:               multinode-720500-m03
	I0603 14:51:48.320671    9752 command_runner.go:130] > Roles:              <none>
	I0603 14:51:48.320671    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500-m03
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_46_05_0700
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:48.320671    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:48.320671    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:46:04 +0000
	I0603 14:51:48.320671    9752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 14:51:48.320671    9752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 14:51:48.320671    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:48.320671    9752 command_runner.go:130] > Lease:
	I0603 14:51:48.320671    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500-m03
	I0603 14:51:48.320671    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:48.320671    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:47:06 +0000
	I0603 14:51:48.320671    9752 command_runner.go:130] > Conditions:
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 14:51:48.320671    9752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 14:51:48.320671    9752 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.320671    9752 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.320671    9752 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:48.320671    9752 command_runner.go:130] > Addresses:
	I0603 14:51:48.320671    9752 command_runner.go:130] >   InternalIP:  172.22.151.134
	I0603 14:51:48.320671    9752 command_runner.go:130] >   Hostname:    multinode-720500-m03
	I0603 14:51:48.320671    9752 command_runner.go:130] > Capacity:
	I0603 14:51:48.320671    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:48.321203    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:48.321203    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:48.321203    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:48.321260    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:48.321260    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:48.321260    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:48.321260    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:48.321260    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:48.321260    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:48.321260    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:48.321381    9752 command_runner.go:130] > System Info:
	I0603 14:51:48.321381    9752 command_runner.go:130] >   Machine ID:                 b3fc7859c5954f1297433aed117b91b8
	I0603 14:51:48.321381    9752 command_runner.go:130] >   System UUID:                e10deb53-3c27-6749-b4b3-758259579a7c
	I0603 14:51:48.321381    9752 command_runner.go:130] >   Boot ID:                    c5481ad8-4fd9-4085-86d3-6f705a8caf45
	I0603 14:51:48.321381    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:48.321381    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:48.321381    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:48.321456    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:48.321456    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:48.321456    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:48.321456    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:48.321456    9752 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0603 14:51:48.321523    9752 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0603 14:51:48.321538    9752 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:48.321554    9752 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0603 14:51:48.321554    9752 command_runner.go:130] >   kube-system                 kindnet-h58hc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0603 14:51:48.321554    9752 command_runner.go:130] >   kube-system                 kube-proxy-ctm5l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0603 14:51:48.321554    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:48.321554    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Resource           Requests   Limits
	I0603 14:51:48.321554    9752 command_runner.go:130] >   --------           --------   ------
	I0603 14:51:48.321554    9752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 14:51:48.321554    9752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 14:51:48.321554    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 14:51:48.321554    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 14:51:48.321554    9752 command_runner.go:130] > Events:
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0603 14:51:48.321554    9752 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  Starting                 5m39s                  kube-proxy       
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-720500-m03 status is now: NodeReady
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m44s (x2 over 5m44s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m44s (x2 over 5m44s)  kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m44s (x2 over 5m44s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  RegisteredNode           5m41s                  node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeReady                5m37s                  kubelet          Node multinode-720500-m03 status is now: NodeReady
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  NodeNotReady             4m1s                   node-controller  Node multinode-720500-m03 status is now: NodeNotReady
	I0603 14:51:48.321554    9752 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	I0603 14:51:48.331127    9752 logs.go:123] Gathering logs for coredns [68e49c3e6dda] ...
	I0603 14:51:48.331127    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68e49c3e6dda"
	I0603 14:51:48.370757    9752 command_runner.go:130] > .:53
	I0603 14:51:48.370757    9752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	I0603 14:51:48.370899    9752 command_runner.go:130] > CoreDNS-1.11.1
	I0603 14:51:48.370899    9752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 14:51:48.370899    9752 command_runner.go:130] > [INFO] 127.0.0.1:41900 - 64692 "HINFO IN 6455764258890599449.483474031935060007. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.132764335s
	I0603 14:51:48.370899    9752 command_runner.go:130] > [INFO] 10.244.1.2:42222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002636s
	I0603 14:51:48.370899    9752 command_runner.go:130] > [INFO] 10.244.1.2:57223 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.096802056s
	I0603 14:51:48.370970    9752 command_runner.go:130] > [INFO] 10.244.1.2:36397 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.151408488s
	I0603 14:51:48.370970    9752 command_runner.go:130] > [INFO] 10.244.1.2:59107 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.364951305s
	I0603 14:51:48.371031    9752 command_runner.go:130] > [INFO] 10.244.0.3:53007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004329s
	I0603 14:51:48.371031    9752 command_runner.go:130] > [INFO] 10.244.0.3:41844 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001542s
	I0603 14:51:48.371031    9752 command_runner.go:130] > [INFO] 10.244.0.3:33279 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174s
	I0603 14:51:48.371100    9752 command_runner.go:130] > [INFO] 10.244.0.3:34469 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0001054s
	I0603 14:51:48.371100    9752 command_runner.go:130] > [INFO] 10.244.1.2:33917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001325s
	I0603 14:51:48.371148    9752 command_runner.go:130] > [INFO] 10.244.1.2:49000 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025227215s
	I0603 14:51:48.371148    9752 command_runner.go:130] > [INFO] 10.244.1.2:40535 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002926s
	I0603 14:51:48.371223    9752 command_runner.go:130] > [INFO] 10.244.1.2:57809 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001012s
	I0603 14:51:48.371246    9752 command_runner.go:130] > [INFO] 10.244.1.2:43376 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024865416s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:51758 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003251s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:42717 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:52073 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001596s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:39307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001382s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:57391 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000513s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:40338 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001263s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:45271 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001333s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:50324 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000215901s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:51522 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001987s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:39150 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001291s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:56081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001424s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:46468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003026s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:57532 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130801s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:36166 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001469s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:58091 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001725s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:52049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274601s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:51870 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002814s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:51517 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001499s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:39242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000636s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:34329 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260201s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:47951 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001521s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:52718 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0003583s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.1.2:45357 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001838s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:50865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001742s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001322s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:51977 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] 10.244.0.3:47306 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001807s
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0603 14:51:48.371274    9752 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0603 14:51:48.374748    9752 logs.go:123] Gathering logs for kube-scheduler [e2d000674d52] ...
	I0603 14:51:48.374804    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2d000674d52"
	I0603 14:51:48.402994    9752 command_runner.go:130] ! I0603 14:50:36.598072       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:48.403181    9752 command_runner.go:130] ! W0603 14:50:39.337367       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 14:51:48.403181    9752 command_runner.go:130] ! W0603 14:50:39.337481       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:48.403379    9752 command_runner.go:130] ! W0603 14:50:39.337517       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 14:51:48.403459    9752 command_runner.go:130] ! W0603 14:50:39.337620       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:51:48.403544    9752 command_runner.go:130] ! I0603 14:50:39.434477       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:51:48.403544    9752 command_runner.go:130] ! I0603 14:50:39.434769       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:48.403544    9752 command_runner.go:130] ! I0603 14:50:39.439758       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:51:48.403609    9752 command_runner.go:130] ! I0603 14:50:39.442615       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:51:48.403634    9752 command_runner.go:130] ! I0603 14:50:39.442644       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:48.403663    9752 command_runner.go:130] ! I0603 14:50:39.443721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:48.403663    9752 command_runner.go:130] ! I0603 14:50:39.542876       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:48.406232    9752 logs.go:123] Gathering logs for kube-scheduler [ec3860b2bb3e] ...
	I0603 14:51:48.406232    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3860b2bb3e"
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:13.528076       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.031664       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.031870       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.032299       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.032427       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:15.125795       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:15.125934       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:15.129030       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:15.132330       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:15.140068       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:48.437916    9752 command_runner.go:130] ! I0603 14:27:15.132344       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.148563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.437916    9752 command_runner.go:130] ! E0603 14:27:15.150706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.151023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:48.437916    9752 command_runner.go:130] ! E0603 14:27:15.152765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:48.437916    9752 command_runner.go:130] ! W0603 14:27:15.154981       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:48.438460    9752 command_runner.go:130] ! E0603 14:27:15.155066       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:48.438460    9752 command_runner.go:130] ! W0603 14:27:15.155620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.438511    9752 command_runner.go:130] ! E0603 14:27:15.155698       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.438552    9752 command_runner.go:130] ! W0603 14:27:15.155839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.438552    9752 command_runner.go:130] ! E0603 14:27:15.155928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.151535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! E0603 14:27:15.156969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.156902       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! E0603 14:27:15.158297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.151896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! E0603 14:27:15.159055       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.152056       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! E0603 14:27:15.159892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.152248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.152377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.152535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.152729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.438611    9752 command_runner.go:130] ! W0603 14:27:15.156318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:48.439158    9752 command_runner.go:130] ! W0603 14:27:15.151779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:48.439226    9752 command_runner.go:130] ! E0603 14:27:15.160787       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:48.439226    9752 command_runner.go:130] ! E0603 14:27:15.160968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:48.439226    9752 command_runner.go:130] ! E0603 14:27:15.161285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:48.439226    9752 command_runner.go:130] ! E0603 14:27:15.161862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:48.439377    9752 command_runner.go:130] ! E0603 14:27:15.161874       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439377    9752 command_runner.go:130] ! E0603 14:27:15.161880       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:48.439472    9752 command_runner.go:130] ! W0603 14:27:16.140920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:48.439493    9752 command_runner.go:130] ! E0603 14:27:16.140979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:48.439531    9752 command_runner.go:130] ! W0603 14:27:16.241899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:48.439570    9752 command_runner.go:130] ! E0603 14:27:16.242196       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.262469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.263070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.294257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.294495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.364252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.364604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.422522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.422581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.468112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.468324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.510809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.511288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.596260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.596369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.607837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.608073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.665087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.666440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.711247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.711594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.716923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.716968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.731690       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.732816       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:48.439615    9752 command_runner.go:130] ! W0603 14:27:16.743716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:27:16.743766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:48.439615    9752 command_runner.go:130] ! I0603 14:27:18.441261       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:48.439615    9752 command_runner.go:130] ! E0603 14:48:07.717597       1 run.go:74] "command failed" err="finished without leader elect"
	I0603 14:51:48.450559    9752 logs.go:123] Gathering logs for kube-controller-manager [f14b3b67d8f2] ...
	I0603 14:51:48.450559    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14b3b67d8f2"
	I0603 14:51:48.479513    9752 command_runner.go:130] ! I0603 14:50:37.132219       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:48.479513    9752 command_runner.go:130] ! I0603 14:50:37.965887       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 14:51:48.479585    9752 command_runner.go:130] ! I0603 14:50:37.966244       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:48.479585    9752 command_runner.go:130] ! I0603 14:50:37.969206       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:48.479585    9752 command_runner.go:130] ! I0603 14:50:37.969593       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:48.479585    9752 command_runner.go:130] ! I0603 14:50:37.970401       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 14:51:48.479665    9752 command_runner.go:130] ! I0603 14:50:37.970711       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:48.479729    9752 command_runner.go:130] ! I0603 14:50:41.339512       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 14:51:48.479729    9752 command_runner.go:130] ! I0603 14:50:41.341523       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 14:51:48.479729    9752 command_runner.go:130] ! E0603 14:50:41.352670       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 14:51:48.479791    9752 command_runner.go:130] ! I0603 14:50:41.352747       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 14:51:48.479813    9752 command_runner.go:130] ! I0603 14:50:41.352812       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 14:51:48.479855    9752 command_runner.go:130] ! I0603 14:50:41.408502       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 14:51:48.479855    9752 command_runner.go:130] ! I0603 14:50:41.409411       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 14:51:48.479855    9752 command_runner.go:130] ! I0603 14:50:41.409645       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 14:51:48.479915    9752 command_runner.go:130] ! I0603 14:50:41.419223       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 14:51:48.479915    9752 command_runner.go:130] ! I0603 14:50:41.421972       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 14:51:48.479915    9752 command_runner.go:130] ! I0603 14:50:41.422044       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 14:51:48.479978    9752 command_runner.go:130] ! I0603 14:50:41.427251       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 14:51:48.480002    9752 command_runner.go:130] ! I0603 14:50:41.427473       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 14:51:48.480027    9752 command_runner.go:130] ! I0603 14:50:41.427485       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 14:51:48.480027    9752 command_runner.go:130] ! I0603 14:50:41.433520       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 14:51:48.480086    9752 command_runner.go:130] ! I0603 14:50:41.433884       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 14:51:48.480086    9752 command_runner.go:130] ! I0603 14:50:41.442828       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 14:51:48.480086    9752 command_runner.go:130] ! I0603 14:50:41.442944       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 14:51:48.480086    9752 command_runner.go:130] ! I0603 14:50:41.443317       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 14:51:48.480166    9752 command_runner.go:130] ! I0603 14:50:41.443408       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.443456       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.444287       1 shared_informer.go:320] Caches are synced for tokens
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.448688       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.448996       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.449010       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.471390       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.478411       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.478486       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.496707       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.496851       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.496864       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.512398       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.512785       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.514642       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.526995       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.528483       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.528503       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.560312       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.560410       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! I0603 14:50:41.560606       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 14:51:48.480193    9752 command_runner.go:130] ! W0603 14:50:41.560637       1 shared_informer.go:597] resyncPeriod 13h36m9.576172414s is smaller than resyncCheckPeriod 18h19m8.512720564s and the informer has already started. Changing it to 18h19m8.512720564s
	I0603 14:51:48.480722    9752 command_runner.go:130] ! I0603 14:50:41.560790       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 14:51:48.480722    9752 command_runner.go:130] ! I0603 14:50:41.560834       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 14:51:48.480722    9752 command_runner.go:130] ! I0603 14:50:41.561009       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 14:51:48.480722    9752 command_runner.go:130] ! I0603 14:50:41.562817       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 14:51:48.480815    9752 command_runner.go:130] ! I0603 14:50:41.562891       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 14:51:48.480815    9752 command_runner.go:130] ! I0603 14:50:41.562939       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 14:51:48.480870    9752 command_runner.go:130] ! I0603 14:50:41.562993       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 14:51:48.480930    9752 command_runner.go:130] ! I0603 14:50:41.563015       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 14:51:48.480968    9752 command_runner.go:130] ! I0603 14:50:41.563032       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 14:51:48.481003    9752 command_runner.go:130] ! I0603 14:50:41.563098       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 14:51:48.481003    9752 command_runner.go:130] ! I0603 14:50:41.564183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 14:51:48.481003    9752 command_runner.go:130] ! I0603 14:50:41.564221       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.564392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.564485       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.564524       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.564636       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.564663       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.564687       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.565005       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.565020       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.565041       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.581314       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.587130       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.587228       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.587968       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.594087       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.594455       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.594469       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.597147       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.597498       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.597530       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.607190       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.607598       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.607632       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.610674       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.610909       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.611242       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 14:51:48.481077    9752 command_runner.go:130] ! I0603 14:50:41.614142       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.614447       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.614483       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.635724       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.635913       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.635952       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.636091       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 14:51:48.481598    9752 command_runner.go:130] ! I0603 14:50:41.640219       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 14:51:48.481773    9752 command_runner.go:130] ! I0603 14:50:41.640668       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 14:51:48.481807    9752 command_runner.go:130] ! I0603 14:50:41.640872       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 14:51:48.481807    9752 command_runner.go:130] ! I0603 14:50:41.653671       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.654023       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.654058       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.667205       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.667229       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.667236       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.669727       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.669883       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.726233       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.726660       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.729282       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.729661       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.729876       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.736485       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.737260       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 14:51:48.481842    9752 command_runner.go:130] ! E0603 14:50:41.740502       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.740814       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.740933       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.741056       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.750961       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.751223       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.751477       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.792608       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.792759       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.792773       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.844612       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.844676       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.844688       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.896427       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 14:51:48.481842    9752 command_runner.go:130] ! I0603 14:50:41.896537       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 14:51:48.482389    9752 command_runner.go:130] ! I0603 14:50:41.896561       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 14:51:48.482389    9752 command_runner.go:130] ! I0603 14:50:41.896589       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 14:51:48.482464    9752 command_runner.go:130] ! I0603 14:50:41.942852       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 14:51:48.482464    9752 command_runner.go:130] ! I0603 14:50:41.943245       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 14:51:48.482464    9752 command_runner.go:130] ! I0603 14:50:41.943758       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 14:51:48.482519    9752 command_runner.go:130] ! I0603 14:50:41.993465       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 14:51:48.482519    9752 command_runner.go:130] ! I0603 14:50:41.993559       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 14:51:48.482519    9752 command_runner.go:130] ! I0603 14:50:41.993571       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 14:51:48.482519    9752 command_runner.go:130] ! I0603 14:50:42.042940       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 14:51:48.482519    9752 command_runner.go:130] ! I0603 14:50:42.043287       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 14:51:48.482519    9752 command_runner.go:130] ! I0603 14:50:42.043532       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 14:51:48.482609    9752 command_runner.go:130] ! I0603 14:50:42.043637       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 14:51:48.482637    9752 command_runner.go:130] ! I0603 14:50:52.110253       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 14:51:48.482637    9752 command_runner.go:130] ! I0603 14:50:52.110544       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 14:51:48.482637    9752 command_runner.go:130] ! I0603 14:50:52.110823       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.111251       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.114516       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.114754       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.114859       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.115420       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.120172       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.120726       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.120900       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.130702       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.132004       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.132310       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.135969       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.136243       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.136643       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.137507       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.137603       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.137643       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.137983       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.138267       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.138302       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.138609       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.138713       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.138746       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.138986       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.143612       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.143872       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.143971       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.153209       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.172692       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 14:51:48.482727    9752 command_runner.go:130] ! I0603 14:50:52.193739       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 14:51:48.483259    9752 command_runner.go:130] ! I0603 14:50:52.202204       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500\" does not exist"
	I0603 14:51:48.483312    9752 command_runner.go:130] ! I0603 14:50:52.202247       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:51:48.483312    9752 command_runner.go:130] ! I0603 14:50:52.202568       1 shared_informer.go:320] Caches are synced for TTL
	I0603 14:51:48.483312    9752 command_runner.go:130] ! I0603 14:50:52.202880       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:48.483415    9752 command_runner.go:130] ! I0603 14:50:52.206448       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:48.483415    9752 command_runner.go:130] ! I0603 14:50:52.209857       1 shared_informer.go:320] Caches are synced for expand
	I0603 14:51:48.483452    9752 command_runner.go:130] ! I0603 14:50:52.210173       1 shared_informer.go:320] Caches are synced for namespace
	I0603 14:51:48.483452    9752 command_runner.go:130] ! I0603 14:50:52.211842       1 shared_informer.go:320] Caches are synced for node
	I0603 14:51:48.483452    9752 command_runner.go:130] ! I0603 14:50:52.213573       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 14:51:48.483452    9752 command_runner.go:130] ! I0603 14:50:52.213786       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 14:51:48.483452    9752 command_runner.go:130] ! I0603 14:50:52.213951       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 14:51:48.483452    9752 command_runner.go:130] ! I0603 14:50:52.214197       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 14:51:48.483615    9752 command_runner.go:130] ! I0603 14:50:52.227537       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 14:51:48.483615    9752 command_runner.go:130] ! I0603 14:50:52.228829       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 14:51:48.483615    9752 command_runner.go:130] ! I0603 14:50:52.230275       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 14:51:48.483615    9752 command_runner.go:130] ! I0603 14:50:52.233623       1 shared_informer.go:320] Caches are synced for HPA
	I0603 14:51:48.483693    9752 command_runner.go:130] ! I0603 14:50:52.237260       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 14:51:48.483693    9752 command_runner.go:130] ! I0603 14:50:52.238266       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 14:51:48.483693    9752 command_runner.go:130] ! I0603 14:50:52.238408       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 14:51:48.483693    9752 command_runner.go:130] ! I0603 14:50:52.238593       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:48.483693    9752 command_runner.go:130] ! I0603 14:50:52.239064       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 14:51:48.483693    9752 command_runner.go:130] ! I0603 14:50:52.242643       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 14:51:48.483778    9752 command_runner.go:130] ! I0603 14:50:52.243734       1 shared_informer.go:320] Caches are synced for taint
	I0603 14:51:48.483778    9752 command_runner.go:130] ! I0603 14:50:52.243982       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 14:51:48.483778    9752 command_runner.go:130] ! I0603 14:50:52.246907       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 14:51:48.483852    9752 command_runner.go:130] ! I0603 14:50:52.248798       1 shared_informer.go:320] Caches are synced for GC
	I0603 14:51:48.483876    9752 command_runner.go:130] ! I0603 14:50:52.249570       1 shared_informer.go:320] Caches are synced for service account
	I0603 14:51:48.483876    9752 command_runner.go:130] ! I0603 14:50:52.252842       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 14:51:48.483876    9752 command_runner.go:130] ! I0603 14:50:52.254214       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 14:51:48.483876    9752 command_runner.go:130] ! I0603 14:50:52.278584       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 14:51:48.483876    9752 command_runner.go:130] ! I0603 14:50:52.278573       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500"
	I0603 14:51:48.483938    9752 command_runner.go:130] ! I0603 14:50:52.278738       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:51:48.483990    9752 command_runner.go:130] ! I0603 14:50:52.278760       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:51:48.484024    9752 command_runner.go:130] ! I0603 14:50:52.279382       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:48.484024    9752 command_runner.go:130] ! I0603 14:50:52.288184       1 shared_informer.go:320] Caches are synced for disruption
	I0603 14:51:48.484061    9752 command_runner.go:130] ! I0603 14:50:52.293854       1 shared_informer.go:320] Caches are synced for deployment
	I0603 14:51:48.484061    9752 command_runner.go:130] ! I0603 14:50:52.294911       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 14:51:48.484099    9752 command_runner.go:130] ! I0603 14:50:52.297844       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 14:51:48.484099    9752 command_runner.go:130] ! I0603 14:50:52.297906       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 14:51:48.484099    9752 command_runner.go:130] ! I0603 14:50:52.303945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.988424ms"
	I0603 14:51:48.484099    9752 command_runner.go:130] ! I0603 14:50:52.304988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.899µs"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.309899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.433483ms"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.310618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.311874       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.315773       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.322625       1 shared_informer.go:320] Caches are synced for job
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.328121       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.345391       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.415295       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.416018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.421610       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.453966       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.465679       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.907461       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.937479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:50:52.937578       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:51:22.286800       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:51:45.740640       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.050345ms"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:51:45.740735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.201µs"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:51:45.758728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.201µs"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:51:45.833756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.845189ms"
	I0603 14:51:48.484164    9752 command_runner.go:130] ! I0603 14:51:45.833914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.301µs"
	I0603 14:51:48.499438    9752 logs.go:123] Gathering logs for kindnet [008dec75d90c] ...
	I0603 14:51:48.499438    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 008dec75d90c"
	I0603 14:51:48.525450    9752 command_runner.go:130] ! I0603 14:50:42.082079       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 14:51:48.525450    9752 command_runner.go:130] ! I0603 14:50:42.082943       1 main.go:107] hostIP = 172.22.154.20
	I0603 14:51:48.525450    9752 command_runner.go:130] ! podIP = 172.22.154.20
	I0603 14:51:48.526146    9752 command_runner.go:130] ! I0603 14:50:42.083380       1 main.go:116] setting mtu 1500 for CNI 
	I0603 14:51:48.526592    9752 command_runner.go:130] ! I0603 14:50:42.083413       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 14:51:48.526592    9752 command_runner.go:130] ! I0603 14:50:42.083683       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 14:51:48.526592    9752 command_runner.go:130] ! I0603 14:51:12.571541       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0603 14:51:48.526592    9752 command_runner.go:130] ! I0603 14:51:12.651275       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:48.526860    9752 command_runner.go:130] ! I0603 14:51:12.651428       1 main.go:227] handling current node
	I0603 14:51:48.526860    9752 command_runner.go:130] ! I0603 14:51:12.652437       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:48.526860    9752 command_runner.go:130] ! I0603 14:51:12.652687       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:48.526962    9752 command_runner.go:130] ! I0603 14:51:12.652926       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.22.146.196 Flags: [] Table: 0} 
	I0603 14:51:48.527032    9752 command_runner.go:130] ! I0603 14:51:12.653574       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:48.527032    9752 command_runner.go:130] ! I0603 14:51:12.653674       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:48.527097    9752 command_runner.go:130] ! I0603 14:51:12.653740       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.22.151.134 Flags: [] Table: 0} 
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:22.664648       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:22.664694       1 main.go:227] handling current node
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:22.664708       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:22.664715       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:22.664826       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:22.665507       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:32.678392       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:32.678477       1 main.go:227] handling current node
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:32.678492       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:32.679315       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:32.679578       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:32.679593       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:42.686747       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:42.686840       1 main.go:227] handling current node
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:42.686854       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:42.686861       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:42.687305       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:48.527148    9752 command_runner.go:130] ! I0603 14:51:42.687446       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:48.530231    9752 logs.go:123] Gathering logs for Docker ...
	I0603 14:51:48.530231    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:05 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:48.563600    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 systemd[1]: Starting Docker Application Container Engine...
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.547305957Z" level=info msg="Starting up"
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.548486369Z" level=info msg="containerd not running, starting managed containerd"
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.550163087Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=663
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.588439684Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615622567Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615812869Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615892669Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615996071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.616816479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.616941980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617127782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617266784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617291284Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617304084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617934891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.618718299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621568528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621673229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621927432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622026433Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622569239Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622740941Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622759241Z" level=info msg="metadata content store policy set" policy=shared
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.634889967Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.634987368Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635019568Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635037868Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635068969Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635139569Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635454873Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635562874Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635584474Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635599174Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635613674Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635627574Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635643175Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635663175Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635679475Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.564611    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635693275Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635706375Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635718075Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635850277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635881177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635899277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635913377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635929077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635942078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635954478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635967678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635981078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635996378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636009278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636021378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636050579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636066579Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636087279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636101979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636113679Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636360182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636390182Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636405182Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636417883Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636428083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636445483Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636457683Z" level=info msg="NRI interface is disabled by configuration."
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636895188Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637062689Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637110790Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637130090Z" level=info msg="containerd successfully booted in 0.051012s"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:58 multinode-720500 dockerd[657]: time="2024-06-03T14:49:58.605269655Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:58 multinode-720500 dockerd[657]: time="2024-06-03T14:49:58.830205845Z" level=info msg="Loading containers: start."
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.290763156Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.371043862Z" level=info msg="Loading containers: done."
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.398495238Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.399429147Z" level=info msg="Daemon has completed initialization"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.454347399Z" level=info msg="API listen on [::]:2376"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.454526701Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 systemd[1]: Started Docker Application Container Engine.
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 systemd[1]: Stopping Docker Application Container Engine...
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.502444000Z" level=info msg="Processing signal 'terminated'"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.507803805Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508158405Z" level=info msg="Daemon shutdown complete"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508284905Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508315705Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: docker.service: Deactivated successfully.
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: Stopped Docker Application Container Engine.
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: Starting Docker Application Container Engine...
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.581999493Z" level=info msg="Starting up"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.582971494Z" level=info msg="containerd not running, starting managed containerd"
	I0603 14:51:48.565599    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.586955297Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1060
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.619972528Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.642740749Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.642897349Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643057949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643079049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643105249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643117549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643236149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643414849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643436249Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643446349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643469050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643579550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646283452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646409552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646539152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646683652Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646720152Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.647911754Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648009354Z" level=info msg="metadata content store policy set" policy=shared
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648261654Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648362554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648383154Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648399754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648413954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648460954Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649437555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649582355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649628755Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649649855Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649667455Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649683955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649698955Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649721455Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649742255Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649758455Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649834555Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649964955Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650022156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650042056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650059256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650077256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650091456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650109256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650125756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650143656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650161256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650181156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.566631    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650384856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650434256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650459456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650483856Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650511256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650529056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650544556Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650596756Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650696356Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650722156Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650741356Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650755156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650769156Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650940656Z" level=info msg="NRI interface is disabled by configuration."
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652184258Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652391658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652570358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652616758Z" level=info msg="containerd successfully booted in 0.035610s"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.629822557Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.661126586Z" level=info msg="Loading containers: start."
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.933266636Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.024107020Z" level=info msg="Loading containers: done."
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.055971749Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.056192749Z" level=info msg="Daemon has completed initialization"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.104434794Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.104654694Z" level=info msg="API listen on [::]:2376"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 systemd[1]: Started Docker Application Container Engine.
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Loaded network plugin cni"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Start cri-dockerd grpc backend"
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-c9wpc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a\""
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-n2t5d_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0\""
	I0603 14:51:48.567601    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.786808143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.786968543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.787857244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.788128044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.878884027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882292830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882532331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882658231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.964961706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965059107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965073207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965170307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0461b752e72814194a3ff0778ad4897f646990c90f8c3fcfb9c28be750bfab15/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.004294343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.006505445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.006802445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.007209145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/29feb700b8ebf36a5e533c2d019afb67137df3c39cd996736aba2eea6197e1b3/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e60bc15f541ebe44a8b2d1cc1a4a878d35fac3b2b8b23ad5b59ae6a7c18fa90/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/192b150e443d2d545d193223f6cdc02bc60fa88f9e646c72e84cad439aec3645/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330597043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330771943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330809243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.569608    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330940843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.411710918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412168918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412399218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412596918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.543921039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544077939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544114939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544224939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547915343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547962443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547974143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.548055043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596002188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596253788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596401388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596628788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633733423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633807223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633821423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633921623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665408852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665567252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665590052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665814152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ae2b089ecf3ba840b08192449967b2406f6c6d0d8a56a114ddaabc35e3c7ee5/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b4a4ad712a66e8ac5a3ba6d988006318e7c0932c2ad0e4ce9838e7a98695f555/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.147693095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.147891096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.148071396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.148525196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236102677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236209377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236229077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236423777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a3698c141b11639f71ba16cbcb832e7c02097b07aaf307ba72c7cf41a64d9dde/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.541976658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.542524859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.542803559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.545377661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.570606    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1054]: time="2024-06-03T14:51:11.898791571Z" level=info msg="ignoring event" container=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.899973164Z" level=info msg="shim disconnected" id=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 namespace=moby
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.900143563Z" level=warning msg="cleaning up after shim disconnected" id=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 namespace=moby
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.900158663Z" level=info msg="cleaning up dead shim" namespace=moby
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147466127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147614527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147634527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.148526626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.314851642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.315085942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.315407842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.320950643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354750647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354889547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354906247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.355401447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894225423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894606924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894797424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894956925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.942044061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.942892263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.943014363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.943428065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:48.571598    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.129614    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:51:51.160093    9752 command_runner.go:130] > 1877
	I0603 14:51:51.160219    9752 api_server.go:72] duration metric: took 1m7.3707328s to wait for apiserver process to appear ...
	I0603 14:51:51.160324    9752 api_server.go:88] waiting for apiserver healthz status ...
	I0603 14:51:51.170922    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0603 14:51:51.193114    9752 command_runner.go:130] > 885576ffcadd
	I0603 14:51:51.193114    9752 logs.go:276] 1 containers: [885576ffcadd]
	I0603 14:51:51.203521    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0603 14:51:51.224331    9752 command_runner.go:130] > 480ef64cfa22
	I0603 14:51:51.225818    9752 logs.go:276] 1 containers: [480ef64cfa22]
	I0603 14:51:51.235814    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0603 14:51:51.256783    9752 command_runner.go:130] > f9b260d61dfb
	I0603 14:51:51.257489    9752 command_runner.go:130] > 68e49c3e6dda
	I0603 14:51:51.258733    9752 logs.go:276] 2 containers: [f9b260d61dfb 68e49c3e6dda]
	I0603 14:51:51.268752    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0603 14:51:51.289154    9752 command_runner.go:130] > e2d000674d52
	I0603 14:51:51.290275    9752 command_runner.go:130] > ec3860b2bb3e
	I0603 14:51:51.290327    9752 logs.go:276] 2 containers: [e2d000674d52 ec3860b2bb3e]
	I0603 14:51:51.299288    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0603 14:51:51.328267    9752 command_runner.go:130] > 42926c33070c
	I0603 14:51:51.328799    9752 command_runner.go:130] > 3823f2e2bdb2
	I0603 14:51:51.328924    9752 logs.go:276] 2 containers: [42926c33070c 3823f2e2bdb2]
	I0603 14:51:51.339766    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0603 14:51:51.364195    9752 command_runner.go:130] > f14b3b67d8f2
	I0603 14:51:51.364195    9752 command_runner.go:130] > 63a6ebee2e83
	I0603 14:51:51.364195    9752 logs.go:276] 2 containers: [f14b3b67d8f2 63a6ebee2e83]
	I0603 14:51:51.374860    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0603 14:51:51.398493    9752 command_runner.go:130] > 008dec75d90c
	I0603 14:51:51.398493    9752 command_runner.go:130] > ab840a6a9856
	I0603 14:51:51.398493    9752 logs.go:276] 2 containers: [008dec75d90c ab840a6a9856]
	I0603 14:51:51.398493    9752 logs.go:123] Gathering logs for kindnet [008dec75d90c] ...
	I0603 14:51:51.398493    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 008dec75d90c"
	I0603 14:51:51.422342    9752 command_runner.go:130] ! I0603 14:50:42.082079       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 14:51:51.423387    9752 command_runner.go:130] ! I0603 14:50:42.082943       1 main.go:107] hostIP = 172.22.154.20
	I0603 14:51:51.423689    9752 command_runner.go:130] ! podIP = 172.22.154.20
	I0603 14:51:51.423689    9752 command_runner.go:130] ! I0603 14:50:42.083380       1 main.go:116] setting mtu 1500 for CNI 
	I0603 14:51:51.423689    9752 command_runner.go:130] ! I0603 14:50:42.083413       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 14:51:51.423689    9752 command_runner.go:130] ! I0603 14:50:42.083683       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 14:51:51.423746    9752 command_runner.go:130] ! I0603 14:51:12.571541       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.651275       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.651428       1 main.go:227] handling current node
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.652437       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.652687       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.652926       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.22.146.196 Flags: [] Table: 0} 
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.653574       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:51.423783    9752 command_runner.go:130] ! I0603 14:51:12.653674       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:51.423920    9752 command_runner.go:130] ! I0603 14:51:12.653740       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.22.151.134 Flags: [] Table: 0} 
	I0603 14:51:51.423920    9752 command_runner.go:130] ! I0603 14:51:22.664648       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:51.423954    9752 command_runner.go:130] ! I0603 14:51:22.664694       1 main.go:227] handling current node
	I0603 14:51:51.423954    9752 command_runner.go:130] ! I0603 14:51:22.664708       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:51.423975    9752 command_runner.go:130] ! I0603 14:51:22.664715       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:51.423975    9752 command_runner.go:130] ! I0603 14:51:22.664826       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:51.424017    9752 command_runner.go:130] ! I0603 14:51:22.665507       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:51.424017    9752 command_runner.go:130] ! I0603 14:51:32.678392       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:51.424017    9752 command_runner.go:130] ! I0603 14:51:32.678477       1 main.go:227] handling current node
	I0603 14:51:51.424055    9752 command_runner.go:130] ! I0603 14:51:32.678492       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:51.424055    9752 command_runner.go:130] ! I0603 14:51:32.679315       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:51.424107    9752 command_runner.go:130] ! I0603 14:51:32.679578       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:51.424107    9752 command_runner.go:130] ! I0603 14:51:32.679593       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:51.424107    9752 command_runner.go:130] ! I0603 14:51:42.686747       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:51.424140    9752 command_runner.go:130] ! I0603 14:51:42.686840       1 main.go:227] handling current node
	I0603 14:51:51.424140    9752 command_runner.go:130] ! I0603 14:51:42.686854       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:51.424140    9752 command_runner.go:130] ! I0603 14:51:42.686861       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:51.424186    9752 command_runner.go:130] ! I0603 14:51:42.687305       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:51.424186    9752 command_runner.go:130] ! I0603 14:51:42.687446       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:51.429915    9752 logs.go:123] Gathering logs for kubelet ...
	I0603 14:51:51.429915    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 14:51:51.460186    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:51.460186    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.461169    1389 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:51.460818    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.461675    1389 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:51.460818    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.463263    1389 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:51.460818    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: E0603 14:50:30.464581    1389 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 14:51:51.460818    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:51.460818    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 14:51:51.460818    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0603 14:51:51.460917    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 14:51:51.460917    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:51.460917    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.183733    1442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:51.460917    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.183842    1442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:51.460917    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.187119    1442 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:51.460997    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: E0603 14:50:31.187481    1442 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 14:51:51.460997    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:51.460997    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.822960    1525 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.823030    1525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.823310    1525 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.825110    1525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.838917    1525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.864578    1525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.864681    1525 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.865871    1525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.865955    1525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-720500","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.867023    1525 topology_manager.go:138] "Creating topology manager with none policy"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.867065    1525 container_manager_linux.go:301] "Creating device plugin manager"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.868032    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872473    1525 kubelet.go:400] "Attempting to sync node with API server"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872570    1525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872603    1525 kubelet.go:312] "Adding apiserver pod source"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.874552    1525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.878535    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.878646    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.881181    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.461070    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.881366    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.461661    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.883254    1525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0603 14:51:51.461661    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.884826    1525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0603 14:51:51.461661    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.885850    1525 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0603 14:51:51.461661    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.886975    1525 server.go:1264] "Started kubelet"
	I0603 14:51:51.461661    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.895136    1525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0603 14:51:51.461764    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.899089    1525 server.go:455] "Adding debug handlers to kubelet server"
	I0603 14:51:51.461764    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.899110    1525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0603 14:51:51.461822    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.901004    1525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0603 14:51:51.461891    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.902811    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.22.154.20:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-720500.17d5860f76c4d283  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-720500,UID:multinode-720500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-720500,},FirstTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,LastTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-720500,}"
	I0603 14:51:51.461891    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.905416    1525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0603 14:51:51.461891    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.915751    1525 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0603 14:51:51.461979    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.921759    1525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0603 14:51:51.461979    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.948843    1525 reconciler.go:26] "Reconciler: start to sync state"
	I0603 14:51:51.461979    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.955483    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="200ms"
	I0603 14:51:51.462066    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.955934    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.462066    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.956139    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.462066    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956405    1525 factory.go:221] Registration of the systemd container factory successfully
	I0603 14:51:51.462239    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956512    1525 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0603 14:51:51.462239    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956608    1525 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0603 14:51:51.462239    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956737    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0603 14:51:51.462239    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.958873    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0603 14:51:51.462239    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.958985    1525 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0603 14:51:51.462334    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.959014    1525 kubelet.go:2337] "Starting kubelet main sync loop"
	I0603 14:51:51.462334    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.959250    1525 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0603 14:51:51.462334    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.983497    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 14:51:51.462422    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 14:51:51.462422    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 14:51:51.462422    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 14:51:51.462422    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 14:51:51.462524    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.993696    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.462558    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.993829    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.462625    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023526    1525 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0603 14:51:51.462625    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023565    1525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0603 14:51:51.462625    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023586    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0603 14:51:51.462625    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024426    1525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0603 14:51:51.462707    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024488    1525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0603 14:51:51.462707    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024529    1525 policy_none.go:49] "None policy: Start"
	I0603 14:51:51.462707    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.028955    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:51.462707    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.030495    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:51.462707    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.035699    1525 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0603 14:51:51.462791    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.035745    1525 state_mem.go:35] "Initializing new in-memory state store"
	I0603 14:51:51.462791    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.036656    1525 state_mem.go:75] "Updated machine memory state"
	I0603 14:51:51.462791    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.041946    1525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0603 14:51:51.462871    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.042384    1525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0603 14:51:51.462871    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.043501    1525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0603 14:51:51.462871    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.049031    1525 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-720500\" not found"
	I0603 14:51:51.462949    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.060498    1525 topology_manager.go:215] "Topology Admit Handler" podUID="f58e384885de6f2352fb028e836ba47f" podNamespace="kube-system" podName="kube-scheduler-multinode-720500"
	I0603 14:51:51.462949    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.061562    1525 topology_manager.go:215] "Topology Admit Handler" podUID="a9aa17bec6c8b90196f8771e2e5c6391" podNamespace="kube-system" podName="kube-apiserver-multinode-720500"
	I0603 14:51:51.463028    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.062289    1525 topology_manager.go:215] "Topology Admit Handler" podUID="78d1bd07ad8cdd8611c0b5d7e797ef30" podNamespace="kube-system" podName="kube-controller-manager-multinode-720500"
	I0603 14:51:51.463028    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.063858    1525 topology_manager.go:215] "Topology Admit Handler" podUID="7a9c45e53018cd74c5a13ccfd96f1479" podNamespace="kube-system" podName="etcd-multinode-720500"
	I0603 14:51:51.463028    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.065312    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b548c7f105007ea217eb3af0981a11ac9ecbfca503b21d85486e0b994bd5ea"
	I0603 14:51:51.463106    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.075734    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a"
	I0603 14:51:51.463106    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.101720    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf3e16838818729d3b0679cd21964fdf47441ebf169a121ac598081429082e9d"
	I0603 14:51:51.463185    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.120274    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91df341636e892cd93c25fa7ad7384bcf2bd819376c32058f4ee8317633ccdb9"
	I0603 14:51:51.463185    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.136641    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73f8312902b01b75c8ea80234be416d3ffc9a1089252bd3c6d01a2cd098215be"
	I0603 14:51:51.463185    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.156601    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0"
	I0603 14:51:51.463263    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.157623    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="400ms"
	I0603 14:51:51.463263    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.173261    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b3080db261aed80f74241b549711c9e0e8bf8d76726121d9447965ca7e2087"
	I0603 14:51:51.463364    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188271    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-kubeconfig\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:51.463364    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188310    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-ca-certs\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:51.463448    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188378    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-k8s-certs\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:51.463448    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188400    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:51.463529    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188427    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7a9c45e53018cd74c5a13ccfd96f1479-etcd-certs\") pod \"etcd-multinode-720500\" (UID: \"7a9c45e53018cd74c5a13ccfd96f1479\") " pod="kube-system/etcd-multinode-720500"
	I0603 14:51:51.463611    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188469    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7a9c45e53018cd74c5a13ccfd96f1479-etcd-data\") pod \"etcd-multinode-720500\" (UID: \"7a9c45e53018cd74c5a13ccfd96f1479\") " pod="kube-system/etcd-multinode-720500"
	I0603 14:51:51.463611    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188506    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f58e384885de6f2352fb028e836ba47f-kubeconfig\") pod \"kube-scheduler-multinode-720500\" (UID: \"f58e384885de6f2352fb028e836ba47f\") " pod="kube-system/kube-scheduler-multinode-720500"
	I0603 14:51:51.463611    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188525    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-ca-certs\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:51.463822    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188569    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-k8s-certs\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:51.463822    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188590    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-flexvolume-dir\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:51.463908    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188614    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:51.463908    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.189831    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45c98b77811e1a1610a97d2f641597b26b618ffe831fe5ad3ec241b34af76a6b"
	I0603 14:51:51.463908    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.211600    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dbe33ccede837b8bf9917f1f085422d402ca29fcadcc3715a72edb8570a28f0"
	I0603 14:51:51.463908    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.232599    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:51.463908    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.233792    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:51.464069    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.559275    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="800ms"
	I0603 14:51:51.464069    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.635611    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:51.464069    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.636574    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:51.464148    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: W0603 14:50:34.930484    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464148    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.930562    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464226    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.013602    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464226    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.013737    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464304    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.058377    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464304    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.058502    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464304    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.276396    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464403    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.276674    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:51.464403    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.361658    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="1.6s"
	I0603 14:51:51.464403    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: I0603 14:50:35.437822    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:51.464403    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.439455    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:51.464403    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.759532    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.22.154.20:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-720500.17d5860f76c4d283  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-720500,UID:multinode-720500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-720500,},FirstTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,LastTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-720500,}"
	I0603 14:51:51.464622    9752 command_runner.go:130] > Jun 03 14:50:37 multinode-720500 kubelet[1525]: I0603 14:50:37.041688    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:51.464622    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.524109    1525 kubelet_node_status.go:112] "Node was previously registered" node="multinode-720500"
	I0603 14:51:51.464622    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.524300    1525 kubelet_node_status.go:76] "Successfully registered node" node="multinode-720500"
	I0603 14:51:51.464622    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.525714    1525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0603 14:51:51.464740    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.527071    1525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0603 14:51:51.464740    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.528427    1525 setters.go:580] "Node became not ready" node="multinode-720500" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-03T14:50:39Z","lastTransitionTime":"2024-06-03T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0603 14:51:51.464740    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.569920    1525 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-720500\" already exists" pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:51.464817    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.884500    1525 apiserver.go:52] "Watching apiserver"
	I0603 14:51:51.464817    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.889699    1525 topology_manager.go:215] "Topology Admit Handler" podUID="ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a" podNamespace="kube-system" podName="kube-proxy-64l9x"
	I0603 14:51:51.464817    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.889893    1525 topology_manager.go:215] "Topology Admit Handler" podUID="08ea7c30-4962-4026-8eb0-6864835e97e6" podNamespace="kube-system" podName="kindnet-26s27"
	I0603 14:51:51.464910    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890015    1525 topology_manager.go:215] "Topology Admit Handler" podUID="5d120704-a803-4278-aa7c-32304a6164a3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c9wpc"
	I0603 14:51:51.464910    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890251    1525 topology_manager.go:215] "Topology Admit Handler" podUID="8380cfdf-9758-4fd8-a511-db50974806a2" podNamespace="kube-system" podName="storage-provisioner"
	I0603 14:51:51.464988    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890408    1525 topology_manager.go:215] "Topology Admit Handler" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef" podNamespace="default" podName="busybox-fc5497c4f-n2t5d"
	I0603 14:51:51.464988    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890532    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-720500" podUID="a99295b9-ba4f-4b3f-9bc7-3e6e09de9b09"
	I0603 14:51:51.465065    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.890739    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.465065    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.891991    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.465144    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.919591    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-720500"
	I0603 14:51:51.465144    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.922418    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0603 14:51:51.465222    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947805    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a-lib-modules\") pod \"kube-proxy-64l9x\" (UID: \"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a\") " pod="kube-system/kube-proxy-64l9x"
	I0603 14:51:51.465222    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947924    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-cni-cfg\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:51.465317    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947970    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-xtables-lock\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:51.465317    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947990    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8380cfdf-9758-4fd8-a511-db50974806a2-tmp\") pod \"storage-provisioner\" (UID: \"8380cfdf-9758-4fd8-a511-db50974806a2\") " pod="kube-system/storage-provisioner"
	I0603 14:51:51.465417    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.948046    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a-xtables-lock\") pod \"kube-proxy-64l9x\" (UID: \"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a\") " pod="kube-system/kube-proxy-64l9x"
	I0603 14:51:51.465417    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.948118    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-lib-modules\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:51.465499    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.949354    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.465582    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.949442    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:40.449414293 +0000 UTC m=+6.735278838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.465582    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.967616    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc25f3659bb9b137f23bf9424dba20e" path="/var/lib/kubelet/pods/2dc25f3659bb9b137f23bf9424dba20e/volumes"
	I0603 14:51:51.465681    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.969042    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36433239452f37b4b0410f69c12da408" path="/var/lib/kubelet/pods/36433239452f37b4b0410f69c12da408/volumes"
	I0603 14:51:51.465681    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984720    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.465681    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984802    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.465802    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984886    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:40.484862826 +0000 UTC m=+6.770727471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.465874    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: I0603 14:50:40.019663    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-720500" podStartSLOduration=1.019649758 podStartE2EDuration="1.019649758s" podCreationTimestamp="2024-06-03 14:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:50:40.018824057 +0000 UTC m=+6.304688702" watchObservedRunningTime="2024-06-03 14:50:40.019649758 +0000 UTC m=+6.305514303"
	I0603 14:51:51.465874    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.455710    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.465960    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.455796    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:41.455777259 +0000 UTC m=+7.741641804 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.465960    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556713    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.465960    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556760    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466041    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556889    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:41.556863952 +0000 UTC m=+7.842728597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466145    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: I0603 14:50:40.845891    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae2b089ecf3ba840b08192449967b2406f6c6d0d8a56a114ddaabc35e3c7ee5"
	I0603 14:51:51.466229    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.271560    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3698c141b11639f71ba16cbcb832e7c02097b07aaf307ba72c7cf41a64d9dde"
	I0603 14:51:51.466265    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.438384    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a4ad712a66e8ac5a3ba6d988006318e7c0932c2ad0e4ce9838e7a98695f555"
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.438646    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-720500" podUID="aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef"
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.465430    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.465640    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:43.465616988 +0000 UTC m=+9.751481633 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.502271    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566766    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566801    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566917    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:43.566874981 +0000 UTC m=+9.852739626 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.961788    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.961975    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:42 multinode-720500 kubelet[1525]: I0603 14:50:42.520599    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-720500" podUID="aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef"
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.487623    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.487724    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:47.487705549 +0000 UTC m=+13.773570194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588583    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588739    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588832    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:47.588814442 +0000 UTC m=+13.874678987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.466293    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.961044    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.466871    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.961649    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.466871    9752 command_runner.go:130] > Jun 03 14:50:44 multinode-720500 kubelet[1525]: E0603 14:50:44.044586    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.466871    9752 command_runner.go:130] > Jun 03 14:50:45 multinode-720500 kubelet[1525]: E0603 14:50:45.961659    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.466871    9752 command_runner.go:130] > Jun 03 14:50:45 multinode-720500 kubelet[1525]: E0603 14:50:45.961954    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.466871    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.521989    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.466871    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.522196    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:55.522177172 +0000 UTC m=+21.808041717 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.467142    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.622845    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.467142    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.623053    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.467142    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.623208    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:55.623162574 +0000 UTC m=+21.909027119 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.962070    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.962858    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.046385    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.959451    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.960279    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:51 multinode-720500 kubelet[1525]: E0603 14:50:51.960531    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:51 multinode-720500 kubelet[1525]: E0603 14:50:51.961799    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:52 multinode-720500 kubelet[1525]: I0603 14:50:52.534860    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-720500" podStartSLOduration=5.534842522 podStartE2EDuration="5.534842522s" podCreationTimestamp="2024-06-03 14:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:50:52.533300056 +0000 UTC m=+18.819164701" watchObservedRunningTime="2024-06-03 14:50:52.534842522 +0000 UTC m=+18.820707067"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:53 multinode-720500 kubelet[1525]: E0603 14:50:53.960555    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:53 multinode-720500 kubelet[1525]: E0603 14:50:53.961087    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:54 multinode-720500 kubelet[1525]: E0603 14:50:54.048175    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.600709    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.600890    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:51:11.600870216 +0000 UTC m=+37.886734761 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701124    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701172    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701306    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:51:11.701288915 +0000 UTC m=+37.987153560 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.959849    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.960175    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:57 multinode-720500 kubelet[1525]: E0603 14:50:57.960559    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:57 multinode-720500 kubelet[1525]: E0603 14:50:57.961245    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.050189    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.962718    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.963597    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:01 multinode-720500 kubelet[1525]: E0603 14:51:01.959962    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:01 multinode-720500 kubelet[1525]: E0603 14:51:01.961107    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:03 multinode-720500 kubelet[1525]: E0603 14:51:03.960485    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:03 multinode-720500 kubelet[1525]: E0603 14:51:03.961168    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:04 multinode-720500 kubelet[1525]: E0603 14:51:04.052718    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:05 multinode-720500 kubelet[1525]: E0603 14:51:05.960258    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:05 multinode-720500 kubelet[1525]: E0603 14:51:05.960918    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:07 multinode-720500 kubelet[1525]: E0603 14:51:07.960257    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:07 multinode-720500 kubelet[1525]: E0603 14:51:07.961704    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.054870    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.962422    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.963393    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.467272    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.663780    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.664114    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:51:43.66409273 +0000 UTC m=+69.949957275 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.764900    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.764958    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.765022    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:51:43.765005046 +0000 UTC m=+70.050869691 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.962142    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.962815    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: I0603 14:51:12.896193    1525 scope.go:117] "RemoveContainer" containerID="097ab9a9a33bbee7997d827b04c2900ded8d532f232d924bb9d84ecc302ec8b8"
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: I0603 14:51:12.896857    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	I0603 14:51:51.468693    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: E0603 14:51:12.897037    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8380cfdf-9758-4fd8-a511-db50974806a2)\"" pod="kube-system/storage-provisioner" podUID="8380cfdf-9758-4fd8-a511-db50974806a2"
	I0603 14:51:51.469208    9752 command_runner.go:130] > Jun 03 14:51:13 multinode-720500 kubelet[1525]: E0603 14:51:13.960835    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.469229    9752 command_runner.go:130] > Jun 03 14:51:13 multinode-720500 kubelet[1525]: E0603 14:51:13.961713    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.469229    9752 command_runner.go:130] > Jun 03 14:51:14 multinode-720500 kubelet[1525]: E0603 14:51:14.056993    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:15 multinode-720500 kubelet[1525]: E0603 14:51:15.959976    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:15 multinode-720500 kubelet[1525]: E0603 14:51:15.961758    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:17 multinode-720500 kubelet[1525]: E0603 14:51:17.963254    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:17 multinode-720500 kubelet[1525]: E0603 14:51:17.963475    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:25 multinode-720500 kubelet[1525]: I0603 14:51:25.959992    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]: E0603 14:51:33.993879    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.037024    1525 scope.go:117] "RemoveContainer" containerID="dcd798ff8a4661302e83f6f11f14422de529b0502fcd6143a4a29a3f45757a8a"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.091663    1525 scope.go:117] "RemoveContainer" containerID="5185046feae6a986658119ffc29d3a23423e83dba5ada983e73072c57ee6ad2d"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.627773    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891"
	I0603 14:51:51.469308    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.667520    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7"
	I0603 14:51:51.519292    9752 logs.go:123] Gathering logs for describe nodes ...
	I0603 14:51:51.519292    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 14:51:51.721796    9752 command_runner.go:130] > Name:               multinode-720500
	I0603 14:51:51.721796    9752 command_runner.go:130] > Roles:              control-plane
	I0603 14:51:51.721796    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_27_19_0700
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0603 14:51:51.721796    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:51.721796    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:51.721796    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:27:15 +0000
	I0603 14:51:51.721796    9752 command_runner.go:130] > Taints:             <none>
	I0603 14:51:51.721796    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:51.721796    9752 command_runner.go:130] > Lease:
	I0603 14:51:51.721796    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500
	I0603 14:51:51.721796    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:51.721796    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:51:51 +0000
	I0603 14:51:51.721796    9752 command_runner.go:130] > Conditions:
	I0603 14:51:51.721796    9752 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0603 14:51:51.721796    9752 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0603 14:51:51.721796    9752 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0603 14:51:51.721796    9752 command_runner.go:130] >   DiskPressure     False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0603 14:51:51.721796    9752 command_runner.go:130] >   PIDPressure      False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0603 14:51:51.721796    9752 command_runner.go:130] >   Ready            True    Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:51:20 +0000   KubeletReady                 kubelet is posting ready status
	I0603 14:51:51.721796    9752 command_runner.go:130] > Addresses:
	I0603 14:51:51.721796    9752 command_runner.go:130] >   InternalIP:  172.22.154.20
	I0603 14:51:51.721796    9752 command_runner.go:130] >   Hostname:    multinode-720500
	I0603 14:51:51.721796    9752 command_runner.go:130] > Capacity:
	I0603 14:51:51.721796    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:51.721796    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:51.721796    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:51.721796    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:51.721796    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:51.721796    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:51.721796    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:51.721796    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:51.721796    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:51.721796    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:51.721796    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:51.721796    9752 command_runner.go:130] > System Info:
	I0603 14:51:51.721796    9752 command_runner.go:130] >   Machine ID:                 d1c31924319744c587cc3327e70686c4
	I0603 14:51:51.721796    9752 command_runner.go:130] >   System UUID:                ea941aa7-cd12-1640-be08-34f8de2baf60
	I0603 14:51:51.721796    9752 command_runner.go:130] >   Boot ID:                    81a28d6f-5e2f-4dbf-9879-01594b427fd6
	I0603 14:51:51.721796    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:51.721796    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:51.722750    9752 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0603 14:51:51.722750    9752 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0603 14:51:51.722750    9752 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:51.722750    9752 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0603 14:51:51.722750    9752 command_runner.go:130] >   default                     busybox-fc5497c4f-n2t5d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-c9wpc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 etcd-multinode-720500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 kindnet-26s27                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-720500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-720500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 kube-proxy-64l9x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-720500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:51.722750    9752 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:51.722750    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:51.722750    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Resource           Requests     Limits
	I0603 14:51:51.722750    9752 command_runner.go:130] >   --------           --------     ------
	I0603 14:51:51.722750    9752 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0603 14:51:51.722750    9752 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0603 14:51:51.722750    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0603 14:51:51.722750    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0603 14:51:51.722750    9752 command_runner.go:130] > Events:
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 14:51:51.722750    9752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  Starting                 69s                kube-proxy       
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-720500 status is now: NodeReady
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  Starting                 78s                kubelet          Starting kubelet.
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Normal  RegisteredNode           59s                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	I0603 14:51:51.722750    9752 command_runner.go:130] > Name:               multinode-720500-m02
	I0603 14:51:51.722750    9752 command_runner.go:130] > Roles:              <none>
	I0603 14:51:51.722750    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500-m02
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_30_31_0700
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:51.722750    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:51.722750    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:30:30 +0000
	I0603 14:51:51.722750    9752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 14:51:51.722750    9752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 14:51:51.722750    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:51.722750    9752 command_runner.go:130] > Lease:
	I0603 14:51:51.722750    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500-m02
	I0603 14:51:51.722750    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:51.722750    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:47:23 +0000
	I0603 14:51:51.722750    9752 command_runner.go:130] > Conditions:
	I0603 14:51:51.722750    9752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 14:51:51.722750    9752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 14:51:51.722750    9752 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.722750    9752 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.723801    9752 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.723801    9752 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.723801    9752 command_runner.go:130] > Addresses:
	I0603 14:51:51.723801    9752 command_runner.go:130] >   InternalIP:  172.22.146.196
	I0603 14:51:51.723801    9752 command_runner.go:130] >   Hostname:    multinode-720500-m02
	I0603 14:51:51.723801    9752 command_runner.go:130] > Capacity:
	I0603 14:51:51.723801    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:51.723801    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:51.723801    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:51.723801    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:51.723801    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:51.723801    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:51.723801    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:51.723801    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:51.723801    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:51.723801    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:51.723961    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:51.723961    9752 command_runner.go:130] > System Info:
	I0603 14:51:51.723961    9752 command_runner.go:130] >   Machine ID:                 235e819893284fd6a235e0cb3c7475f0
	I0603 14:51:51.723961    9752 command_runner.go:130] >   System UUID:                e57aaa06-73e1-b24d-bfac-b1ae5e512ff1
	I0603 14:51:51.723961    9752 command_runner.go:130] >   Boot ID:                    fe92bdd5-fbf4-4f1a-9684-a535d77de9c7
	I0603 14:51:51.723961    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:51.723961    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:51.723961    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:51.724046    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:51.724046    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:51.724046    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:51.724046    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:51.724046    9752 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0603 14:51:51.724046    9752 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0603 14:51:51.724046    9752 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0603 14:51:51.724125    9752 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:51.724125    9752 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0603 14:51:51.724125    9752 command_runner.go:130] >   default                     busybox-fc5497c4f-mjhcf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 14:51:51.724125    9752 command_runner.go:130] >   kube-system                 kindnet-fmfz2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0603 14:51:51.724125    9752 command_runner.go:130] >   kube-system                 kube-proxy-sm9rr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0603 14:51:51.724125    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:51.724203    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:51.724203    9752 command_runner.go:130] >   Resource           Requests   Limits
	I0603 14:51:51.724203    9752 command_runner.go:130] >   --------           --------   ------
	I0603 14:51:51.724203    9752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 14:51:51.724203    9752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 14:51:51.724203    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 14:51:51.724281    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 14:51:51.724281    9752 command_runner.go:130] > Events:
	I0603 14:51:51.724281    9752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 14:51:51.724281    9752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 14:51:51.724281    9752 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0603 14:51:51.724281    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientMemory
	I0603 14:51:51.724376    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasNoDiskPressure
	I0603 14:51:51.724376    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientPID
	I0603 14:51:51.724376    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:51.724376    9752 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	I0603 14:51:51.724376    9752 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-720500-m02 status is now: NodeReady
	I0603 14:51:51.724458    9752 command_runner.go:130] >   Normal  NodeNotReady             3m44s              node-controller  Node multinode-720500-m02 status is now: NodeNotReady
	I0603 14:51:51.724458    9752 command_runner.go:130] >   Normal  RegisteredNode           59s                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	I0603 14:51:51.724458    9752 command_runner.go:130] > Name:               multinode-720500-m03
	I0603 14:51:51.724458    9752 command_runner.go:130] > Roles:              <none>
	I0603 14:51:51.724458    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:51.724458    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500-m03
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_46_05_0700
	I0603 14:51:51.724540    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:51.724622    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:51.724622    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:51.724622    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:51.724622    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:46:04 +0000
	I0603 14:51:51.724622    9752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 14:51:51.724622    9752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 14:51:51.724622    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:51.724622    9752 command_runner.go:130] > Lease:
	I0603 14:51:51.724732    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500-m03
	I0603 14:51:51.724732    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:51.724732    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:47:06 +0000
	I0603 14:51:51.724732    9752 command_runner.go:130] > Conditions:
	I0603 14:51:51.724732    9752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 14:51:51.724808    9752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 14:51:51.724808    9752 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.724808    9752 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.724808    9752 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.724808    9752 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:51.724885    9752 command_runner.go:130] > Addresses:
	I0603 14:51:51.724885    9752 command_runner.go:130] >   InternalIP:  172.22.151.134
	I0603 14:51:51.724885    9752 command_runner.go:130] >   Hostname:    multinode-720500-m03
	I0603 14:51:51.724885    9752 command_runner.go:130] > Capacity:
	I0603 14:51:51.724885    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:51.724885    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:51.724885    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:51.724885    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:51.724885    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:51.724963    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:51.724963    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:51.724963    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:51.724963    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:51.724963    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:51.724963    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:51.724963    9752 command_runner.go:130] > System Info:
	I0603 14:51:51.724963    9752 command_runner.go:130] >   Machine ID:                 b3fc7859c5954f1297433aed117b91b8
	I0603 14:51:51.724963    9752 command_runner.go:130] >   System UUID:                e10deb53-3c27-6749-b4b3-758259579a7c
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Boot ID:                    c5481ad8-4fd9-4085-86d3-6f705a8caf45
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:51.725038    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:51.725038    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:51.725115    9752 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0603 14:51:51.725115    9752 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0603 14:51:51.725115    9752 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0603 14:51:51.725115    9752 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:51.725115    9752 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0603 14:51:51.725115    9752 command_runner.go:130] >   kube-system                 kindnet-h58hc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0603 14:51:51.725115    9752 command_runner.go:130] >   kube-system                 kube-proxy-ctm5l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0603 14:51:51.725192    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:51.725192    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:51.725192    9752 command_runner.go:130] >   Resource           Requests   Limits
	I0603 14:51:51.725192    9752 command_runner.go:130] >   --------           --------   ------
	I0603 14:51:51.725279    9752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 14:51:51.725279    9752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 14:51:51.725279    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 14:51:51.725279    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 14:51:51.725279    9752 command_runner.go:130] > Events:
	I0603 14:51:51.725279    9752 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0603 14:51:51.725279    9752 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0603 14:51:51.725279    9752 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0603 14:51:51.725399    9752 command_runner.go:130] >   Normal  Starting                 5m43s                  kube-proxy       
	I0603 14:51:51.725399    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:51.725399    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	I0603 14:51:51.725399    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	I0603 14:51:51.725399    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	I0603 14:51:51.725399    9752 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-720500-m03 status is now: NodeReady
	I0603 14:51:51.725486    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m47s (x2 over 5m47s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	I0603 14:51:51.725486    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m47s (x2 over 5m47s)  kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	I0603 14:51:51.725486    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m47s (x2 over 5m47s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	I0603 14:51:51.725486    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:51.725569    9752 command_runner.go:130] >   Normal  RegisteredNode           5m44s                  node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	I0603 14:51:51.725569    9752 command_runner.go:130] >   Normal  NodeReady                5m40s                  kubelet          Node multinode-720500-m03 status is now: NodeReady
	I0603 14:51:51.725569    9752 command_runner.go:130] >   Normal  NodeNotReady             4m4s                   node-controller  Node multinode-720500-m03 status is now: NodeNotReady
	I0603 14:51:51.725569    9752 command_runner.go:130] >   Normal  RegisteredNode           59s                    node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	I0603 14:51:51.734742    9752 logs.go:123] Gathering logs for coredns [68e49c3e6dda] ...
	I0603 14:51:51.734742    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68e49c3e6dda"
	I0603 14:51:51.763787    9752 command_runner.go:130] > .:53
	I0603 14:51:51.764082    9752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	I0603 14:51:51.764082    9752 command_runner.go:130] > CoreDNS-1.11.1
	I0603 14:51:51.764082    9752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 14:51:51.764159    9752 command_runner.go:130] > [INFO] 127.0.0.1:41900 - 64692 "HINFO IN 6455764258890599449.483474031935060007. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.132764335s
	I0603 14:51:51.764159    9752 command_runner.go:130] > [INFO] 10.244.1.2:42222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002636s
	I0603 14:51:51.764196    9752 command_runner.go:130] > [INFO] 10.244.1.2:57223 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.096802056s
	I0603 14:51:51.764196    9752 command_runner.go:130] > [INFO] 10.244.1.2:36397 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.151408488s
	I0603 14:51:51.764234    9752 command_runner.go:130] > [INFO] 10.244.1.2:59107 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.364951305s
	I0603 14:51:51.764234    9752 command_runner.go:130] > [INFO] 10.244.0.3:53007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004329s
	I0603 14:51:51.764275    9752 command_runner.go:130] > [INFO] 10.244.0.3:41844 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001542s
	I0603 14:51:51.764275    9752 command_runner.go:130] > [INFO] 10.244.0.3:33279 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174s
	I0603 14:51:51.764275    9752 command_runner.go:130] > [INFO] 10.244.0.3:34469 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0001054s
	I0603 14:51:51.764340    9752 command_runner.go:130] > [INFO] 10.244.1.2:33917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001325s
	I0603 14:51:51.764340    9752 command_runner.go:130] > [INFO] 10.244.1.2:49000 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025227215s
	I0603 14:51:51.764340    9752 command_runner.go:130] > [INFO] 10.244.1.2:40535 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002926s
	I0603 14:51:51.764340    9752 command_runner.go:130] > [INFO] 10.244.1.2:57809 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001012s
	I0603 14:51:51.764408    9752 command_runner.go:130] > [INFO] 10.244.1.2:43376 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024865416s
	I0603 14:51:51.764408    9752 command_runner.go:130] > [INFO] 10.244.1.2:51758 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003251s
	I0603 14:51:51.764465    9752 command_runner.go:130] > [INFO] 10.244.1.2:42717 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112s
	I0603 14:51:51.764509    9752 command_runner.go:130] > [INFO] 10.244.1.2:52073 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001596s
	I0603 14:51:51.764509    9752 command_runner.go:130] > [INFO] 10.244.0.3:39307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001382s
	I0603 14:51:51.764509    9752 command_runner.go:130] > [INFO] 10.244.0.3:57391 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000513s
	I0603 14:51:51.764563    9752 command_runner.go:130] > [INFO] 10.244.0.3:40338 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001263s
	I0603 14:51:51.764563    9752 command_runner.go:130] > [INFO] 10.244.0.3:45271 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001333s
	I0603 14:51:51.764563    9752 command_runner.go:130] > [INFO] 10.244.0.3:50324 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000215901s
	I0603 14:51:51.764616    9752 command_runner.go:130] > [INFO] 10.244.0.3:51522 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001987s
	I0603 14:51:51.764616    9752 command_runner.go:130] > [INFO] 10.244.0.3:39150 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001291s
	I0603 14:51:51.764616    9752 command_runner.go:130] > [INFO] 10.244.0.3:56081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001424s
	I0603 14:51:51.764616    9752 command_runner.go:130] > [INFO] 10.244.1.2:46468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003026s
	I0603 14:51:51.764689    9752 command_runner.go:130] > [INFO] 10.244.1.2:57532 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130801s
	I0603 14:51:51.764689    9752 command_runner.go:130] > [INFO] 10.244.1.2:36166 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001469s
	I0603 14:51:51.764689    9752 command_runner.go:130] > [INFO] 10.244.1.2:58091 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001725s
	I0603 14:51:51.764747    9752 command_runner.go:130] > [INFO] 10.244.0.3:52049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274601s
	I0603 14:51:51.764747    9752 command_runner.go:130] > [INFO] 10.244.0.3:51870 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002814s
	I0603 14:51:51.764747    9752 command_runner.go:130] > [INFO] 10.244.0.3:51517 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001499s
	I0603 14:51:51.764747    9752 command_runner.go:130] > [INFO] 10.244.0.3:39242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000636s
	I0603 14:51:51.764819    9752 command_runner.go:130] > [INFO] 10.244.1.2:34329 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260201s
	I0603 14:51:51.764852    9752 command_runner.go:130] > [INFO] 10.244.1.2:47951 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001521s
	I0603 14:51:51.764852    9752 command_runner.go:130] > [INFO] 10.244.1.2:52718 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0003583s
	I0603 14:51:51.764852    9752 command_runner.go:130] > [INFO] 10.244.1.2:45357 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001838s
	I0603 14:51:51.764852    9752 command_runner.go:130] > [INFO] 10.244.0.3:50865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001742s
	I0603 14:51:51.764906    9752 command_runner.go:130] > [INFO] 10.244.0.3:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001322s
	I0603 14:51:51.764906    9752 command_runner.go:130] > [INFO] 10.244.0.3:51977 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	I0603 14:51:51.764906    9752 command_runner.go:130] > [INFO] 10.244.0.3:47306 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001807s
	I0603 14:51:51.764941    9752 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0603 14:51:51.764941    9752 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0603 14:51:51.768025    9752 logs.go:123] Gathering logs for kube-proxy [42926c33070c] ...
	I0603 14:51:51.768133    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42926c33070c"
	I0603 14:51:51.811741    9752 command_runner.go:130] ! I0603 14:50:42.069219       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:51:51.812148    9752 command_runner.go:130] ! I0603 14:50:42.114052       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.154.20"]
	I0603 14:51:51.812148    9752 command_runner.go:130] ! I0603 14:50:42.256500       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:51:51.812203    9752 command_runner.go:130] ! I0603 14:50:42.256559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:51:51.812203    9752 command_runner.go:130] ! I0603 14:50:42.256598       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:51:51.812276    9752 command_runner.go:130] ! I0603 14:50:42.262735       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:51:51.812301    9752 command_runner.go:130] ! I0603 14:50:42.263687       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:51:51.812349    9752 command_runner.go:130] ! I0603 14:50:42.263771       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:51.812349    9752 command_runner.go:130] ! I0603 14:50:42.271889       1 config.go:192] "Starting service config controller"
	I0603 14:51:51.812349    9752 command_runner.go:130] ! I0603 14:50:42.273191       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:51:51.812438    9752 command_runner.go:130] ! I0603 14:50:42.273658       1 config.go:319] "Starting node config controller"
	I0603 14:51:51.812438    9752 command_runner.go:130] ! I0603 14:50:42.273675       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:51:51.812438    9752 command_runner.go:130] ! I0603 14:50:42.275244       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:51:51.812479    9752 command_runner.go:130] ! I0603 14:50:42.279063       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:51:51.812479    9752 command_runner.go:130] ! I0603 14:50:42.373930       1 shared_informer.go:320] Caches are synced for node config
	I0603 14:51:51.812527    9752 command_runner.go:130] ! I0603 14:50:42.373994       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:51:51.812527    9752 command_runner.go:130] ! I0603 14:50:42.379201       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:51:51.814151    9752 logs.go:123] Gathering logs for kube-controller-manager [63a6ebee2e83] ...
	I0603 14:51:51.814151    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a6ebee2e83"
	I0603 14:51:51.841509    9752 command_runner.go:130] ! I0603 14:27:13.353282       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:51.841509    9752 command_runner.go:130] ! I0603 14:27:13.803232       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 14:51:51.841609    9752 command_runner.go:130] ! I0603 14:27:13.803270       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:51.841742    9752 command_runner.go:130] ! I0603 14:27:13.805599       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:13.806647       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:13.806911       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:13.807149       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.070475       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.071643       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.088516       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.089260       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.091678       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.106231       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.107081       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 14:51:51.841784    9752 command_runner.go:130] ! I0603 14:27:18.108455       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:51.842311    9752 command_runner.go:130] ! I0603 14:27:18.109348       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 14:51:51.842311    9752 command_runner.go:130] ! I0603 14:27:18.151033       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 14:51:51.842380    9752 command_runner.go:130] ! I0603 14:27:18.151678       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 14:51:51.842380    9752 command_runner.go:130] ! I0603 14:27:18.154062       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 14:51:51.842380    9752 command_runner.go:130] ! I0603 14:27:18.171773       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 14:51:51.842465    9752 command_runner.go:130] ! I0603 14:27:18.172224       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 14:51:51.842465    9752 command_runner.go:130] ! I0603 14:27:18.174296       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 14:51:51.842465    9752 command_runner.go:130] ! I0603 14:27:18.174338       1 shared_informer.go:320] Caches are synced for tokens
	I0603 14:51:51.842465    9752 command_runner.go:130] ! I0603 14:27:18.177788       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 14:51:51.843025    9752 command_runner.go:130] ! I0603 14:27:18.178320       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 14:51:51.843188    9752 command_runner.go:130] ! I0603 14:27:28.218964       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 14:51:51.843267    9752 command_runner.go:130] ! I0603 14:27:28.219108       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 14:51:51.843301    9752 command_runner.go:130] ! I0603 14:27:28.219379       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 14:51:51.843340    9752 command_runner.go:130] ! I0603 14:27:28.219457       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 14:51:51.843340    9752 command_runner.go:130] ! I0603 14:27:28.240397       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 14:51:51.843340    9752 command_runner.go:130] ! I0603 14:27:28.240536       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 14:51:51.843340    9752 command_runner.go:130] ! I0603 14:27:28.241865       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 14:51:51.843425    9752 command_runner.go:130] ! I0603 14:27:28.252890       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 14:51:51.843467    9752 command_runner.go:130] ! I0603 14:27:28.252986       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 14:51:51.843467    9752 command_runner.go:130] ! I0603 14:27:28.253020       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 14:51:51.843536    9752 command_runner.go:130] ! I0603 14:27:28.253969       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 14:51:51.843536    9752 command_runner.go:130] ! I0603 14:27:28.254003       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 14:51:51.843576    9752 command_runner.go:130] ! I0603 14:27:28.267837       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 14:51:51.843576    9752 command_runner.go:130] ! I0603 14:27:28.268144       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 14:51:51.843576    9752 command_runner.go:130] ! I0603 14:27:28.268510       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 14:51:51.843634    9752 command_runner.go:130] ! I0603 14:27:28.280487       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:51.843934    9752 command_runner.go:130] ! I0603 14:27:28.280963       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:51.843967    9752 command_runner.go:130] ! I0603 14:27:28.281100       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 14:51:51.843967    9752 command_runner.go:130] ! I0603 14:27:28.330303       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 14:51:51.843967    9752 command_runner.go:130] ! I0603 14:27:28.330841       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 14:51:51.844019    9752 command_runner.go:130] ! E0603 14:27:28.344040       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 14:51:51.844019    9752 command_runner.go:130] ! I0603 14:27:28.344231       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 14:51:51.844108    9752 command_runner.go:130] ! I0603 14:27:28.359644       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 14:51:51.844123    9752 command_runner.go:130] ! I0603 14:27:28.360056       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 14:51:51.844123    9752 command_runner.go:130] ! I0603 14:27:28.360090       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 14:51:51.844123    9752 command_runner.go:130] ! I0603 14:27:28.377777       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 14:51:51.844827    9752 command_runner.go:130] ! I0603 14:27:28.378044       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.378071       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.393317       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.393857       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.394059       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.410446       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.411081       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.412101       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.512629       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.513125       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.664349       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.664428       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.664441       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.664449       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.708054       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.708215       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 14:51:51.844885    9752 command_runner.go:130] ! I0603 14:27:28.708231       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:28.708444       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:28.708473       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:28.708481       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:28.864634       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:28.864803       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:28.865680       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.059529       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.059649       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.059722       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.059857       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.216054       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.216706       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.217129       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.364837       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.364997       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.365010       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.412763       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 14:51:51.845414    9752 command_runner.go:130] ! I0603 14:27:29.412820       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 14:51:51.845766    9752 command_runner.go:130] ! I0603 14:27:29.412852       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 14:51:51.845766    9752 command_runner.go:130] ! I0603 14:27:29.412870       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 14:51:51.845766    9752 command_runner.go:130] ! I0603 14:27:29.566965       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 14:51:51.845766    9752 command_runner.go:130] ! I0603 14:27:29.567223       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 14:51:51.845766    9752 command_runner.go:130] ! I0603 14:27:29.568152       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 14:51:51.845841    9752 command_runner.go:130] ! I0603 14:27:29.820140       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 14:51:51.845841    9752 command_runner.go:130] ! I0603 14:27:29.821302       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 14:51:51.845884    9752 command_runner.go:130] ! I0603 14:27:29.821913       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 14:51:51.845884    9752 command_runner.go:130] ! I0603 14:27:29.821950       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 14:51:51.845884    9752 command_runner.go:130] ! I0603 14:27:29.821977       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 14:51:51.845964    9752 command_runner.go:130] ! E0603 14:27:29.857788       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 14:51:51.845964    9752 command_runner.go:130] ! I0603 14:27:29.858966       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 14:51:51.845964    9752 command_runner.go:130] ! I0603 14:27:30.016833       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 14:51:51.846011    9752 command_runner.go:130] ! I0603 14:27:30.016997       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 14:51:51.846011    9752 command_runner.go:130] ! I0603 14:27:30.017402       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 14:51:51.846066    9752 command_runner.go:130] ! I0603 14:27:30.171847       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 14:51:51.846066    9752 command_runner.go:130] ! I0603 14:27:30.172459       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 14:51:51.846122    9752 command_runner.go:130] ! I0603 14:27:30.171899       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 14:51:51.846122    9752 command_runner.go:130] ! I0603 14:27:30.172588       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 14:51:51.846170    9752 command_runner.go:130] ! I0603 14:27:30.313964       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 14:51:51.846170    9752 command_runner.go:130] ! I0603 14:27:30.316900       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 14:51:51.846210    9752 command_runner.go:130] ! I0603 14:27:30.318749       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 14:51:51.846210    9752 command_runner.go:130] ! I0603 14:27:30.359770       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 14:51:51.846210    9752 command_runner.go:130] ! I0603 14:27:30.359992       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 14:51:51.846270    9752 command_runner.go:130] ! I0603 14:27:30.360405       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:51.846270    9752 command_runner.go:130] ! I0603 14:27:30.361780       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 14:51:51.846314    9752 command_runner.go:130] ! I0603 14:27:30.362782       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 14:51:51.846314    9752 command_runner.go:130] ! I0603 14:27:30.362463       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 14:51:51.846363    9752 command_runner.go:130] ! I0603 14:27:30.363332       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.362554       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.363636       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.362564       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.362302       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.362526       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.362586       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:51.846384    9752 command_runner.go:130] ! I0603 14:27:30.513474       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.513598       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.513645       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.663349       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.663937       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.664013       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.965387       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.965553       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.965614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.965669       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.965730       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! W0603 14:27:30.965760       1 shared_informer.go:597] resyncPeriod 16h47m43.189313611s is smaller than resyncCheckPeriod 20h18m50.945071724s and the informer has already started. Changing it to 20h18m50.945071724s
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.965868       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.966063       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.966153       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.966351       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! W0603 14:27:30.966376       1 shared_informer.go:597] resyncPeriod 20h4m14.719740563s is smaller than resyncCheckPeriod 20h18m50.945071724s and the informer has already started. Changing it to 20h18m50.945071724s
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.966444       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.966547       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.966953       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.967035       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.967206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.967556       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.967765       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.967951       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.968043       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.968127       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.968266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.968373       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 14:51:51.846528    9752 command_runner.go:130] ! I0603 14:27:30.969236       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 14:51:51.847060    9752 command_runner.go:130] ! I0603 14:27:30.969448       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:51.847060    9752 command_runner.go:130] ! I0603 14:27:30.969971       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 14:51:51.847060    9752 command_runner.go:130] ! I0603 14:27:31.113941       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 14:51:51.847101    9752 command_runner.go:130] ! I0603 14:27:31.114128       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 14:51:51.847101    9752 command_runner.go:130] ! I0603 14:27:31.114206       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 14:51:51.847101    9752 command_runner.go:130] ! I0603 14:27:31.263385       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 14:51:51.847173    9752 command_runner.go:130] ! I0603 14:27:31.263850       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 14:51:51.847173    9752 command_runner.go:130] ! I0603 14:27:31.263883       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 14:51:51.847204    9752 command_runner.go:130] ! I0603 14:27:31.412784       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 14:51:51.847230    9752 command_runner.go:130] ! I0603 14:27:31.412929       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 14:51:51.847258    9752 command_runner.go:130] ! I0603 14:27:31.412960       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 14:51:51.847258    9752 command_runner.go:130] ! I0603 14:27:31.563645       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 14:51:51.847287    9752 command_runner.go:130] ! I0603 14:27:31.563784       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 14:51:51.847287    9752 command_runner.go:130] ! I0603 14:27:31.563863       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.716550       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.717040       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.717246       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.727461       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.754004       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500\" does not exist"
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.754224       1 shared_informer.go:320] Caches are synced for GC
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.754460       1 shared_informer.go:320] Caches are synced for HPA
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.760470       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.761503       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.763249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.763617       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.764580       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.765622       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.765811       1 shared_informer.go:320] Caches are synced for TTL
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.765139       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.765067       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.768636       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.770136       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.772665       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.775271       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.782285       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.792874       1 shared_informer.go:320] Caches are synced for service account
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.795205       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.809247       1 shared_informer.go:320] Caches are synced for taint
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.809495       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.810723       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500"
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.812015       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.812917       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.812992       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:51:51.847314    9752 command_runner.go:130] ! I0603 14:27:31.815953       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 14:51:51.847862    9752 command_runner.go:130] ! I0603 14:27:31.816065       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 14:51:51.847862    9752 command_runner.go:130] ! I0603 14:27:31.816884       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 14:51:51.847913    9752 command_runner.go:130] ! I0603 14:27:31.817703       1 shared_informer.go:320] Caches are synced for expand
	I0603 14:51:51.847913    9752 command_runner.go:130] ! I0603 14:27:31.817728       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:51:51.847913    9752 command_runner.go:130] ! I0603 14:27:31.819607       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 14:51:51.847913    9752 command_runner.go:130] ! I0603 14:27:31.820072       1 shared_informer.go:320] Caches are synced for node
	I0603 14:51:51.847973    9752 command_runner.go:130] ! I0603 14:27:31.820270       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 14:51:51.847973    9752 command_runner.go:130] ! I0603 14:27:31.820477       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 14:51:51.848016    9752 command_runner.go:130] ! I0603 14:27:31.820555       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 14:51:51.848016    9752 command_runner.go:130] ! I0603 14:27:31.820587       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 14:51:51.848016    9752 command_runner.go:130] ! I0603 14:27:31.820081       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 14:51:51.848016    9752 command_runner.go:130] ! I0603 14:27:31.825727       1 shared_informer.go:320] Caches are synced for namespace
	I0603 14:51:51.848016    9752 command_runner.go:130] ! I0603 14:27:31.832846       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 14:51:51.848071    9752 command_runner.go:130] ! I0603 14:27:31.842133       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:51:51.848071    9752 command_runner.go:130] ! I0603 14:27:31.855357       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500" podCIDRs=["10.244.0.0/24"]
	I0603 14:51:51.848071    9752 command_runner.go:130] ! I0603 14:27:31.878271       1 shared_informer.go:320] Caches are synced for job
	I0603 14:51:51.848144    9752 command_runner.go:130] ! I0603 14:27:31.913558       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:51:51.848144    9752 command_runner.go:130] ! I0603 14:27:31.965153       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:51:51.848144    9752 command_runner.go:130] ! I0603 14:27:32.028352       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:51.848213    9752 command_runner.go:130] ! I0603 14:27:32.061268       1 shared_informer.go:320] Caches are synced for disruption
	I0603 14:51:51.848213    9752 command_runner.go:130] ! I0603 14:27:32.065241       1 shared_informer.go:320] Caches are synced for deployment
	I0603 14:51:51.848266    9752 command_runner.go:130] ! I0603 14:27:32.069863       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:51.848289    9752 command_runner.go:130] ! I0603 14:27:32.469591       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:51.848289    9752 command_runner.go:130] ! I0603 14:27:32.510278       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:51.848316    9752 command_runner.go:130] ! I0603 14:27:32.510533       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:51.848316    9752 command_runner.go:130] ! I0603 14:27:33.110436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="199.281878ms"
	I0603 14:51:51.848387    9752 command_runner.go:130] ! I0603 14:27:33.230475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="119.89616ms"
	I0603 14:51:51.848387    9752 command_runner.go:130] ! I0603 14:27:33.230569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59µs"
	I0603 14:51:51.848428    9752 command_runner.go:130] ! I0603 14:27:34.176449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.004127ms"
	I0603 14:51:51.848428    9752 command_runner.go:130] ! I0603 14:27:34.199426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.643683ms"
	I0603 14:51:51.848428    9752 command_runner.go:130] ! I0603 14:27:34.201037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.6µs"
	I0603 14:51:51.848482    9752 command_runner.go:130] ! I0603 14:27:43.109227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="168.101µs"
	I0603 14:51:51.848522    9752 command_runner.go:130] ! I0603 14:27:43.154756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="203.6µs"
	I0603 14:51:51.848522    9752 command_runner.go:130] ! I0603 14:27:44.622262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.3µs"
	I0603 14:51:51.848576    9752 command_runner.go:130] ! I0603 14:27:45.655101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.946906ms"
	I0603 14:51:51.848576    9752 command_runner.go:130] ! I0603 14:27:45.656447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.098µs"
	I0603 14:51:51.848616    9752 command_runner.go:130] ! I0603 14:27:46.817078       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:51.848616    9752 command_runner.go:130] ! I0603 14:30:30.530460       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:51:51.848701    9752 command_runner.go:130] ! I0603 14:30:30.563054       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m02" podCIDRs=["10.244.1.0/24"]
	I0603 14:51:51.848739    9752 command_runner.go:130] ! I0603 14:30:31.846889       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:51:51.848739    9752 command_runner.go:130] ! I0603 14:30:49.741096       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.848739    9752 command_runner.go:130] ! I0603 14:31:16.611365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.145667ms"
	I0603 14:51:51.848790    9752 command_runner.go:130] ! I0603 14:31:16.634251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.843998ms"
	I0603 14:51:51.848790    9752 command_runner.go:130] ! I0603 14:31:16.634722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="196.103µs"
	I0603 14:51:51.848828    9752 command_runner.go:130] ! I0603 14:31:16.635057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.4µs"
	I0603 14:51:51.848828    9752 command_runner.go:130] ! I0603 14:31:16.670503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.001µs"
	I0603 14:51:51.848879    9752 command_runner.go:130] ! I0603 14:31:19.698737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.129108ms"
	I0603 14:51:51.848918    9752 command_runner.go:130] ! I0603 14:31:19.698833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.8µs"
	I0603 14:51:51.848918    9752 command_runner.go:130] ! I0603 14:31:20.055879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.87041ms"
	I0603 14:51:51.848918    9752 command_runner.go:130] ! I0603 14:31:20.057158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.2µs"
	I0603 14:51:51.848967    9752 command_runner.go:130] ! I0603 14:35:14.351135       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.849006    9752 command_runner.go:130] ! I0603 14:35:14.351827       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:51.849006    9752 command_runner.go:130] ! I0603 14:35:14.376803       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.2.0/24"]
	I0603 14:51:51.849143    9752 command_runner.go:130] ! I0603 14:35:16.927010       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:51:51.849198    9752 command_runner.go:130] ! I0603 14:35:33.157459       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.849198    9752 command_runner.go:130] ! I0603 14:43:17.065455       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.849198    9752 command_runner.go:130] ! I0603 14:45:58.451014       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.849247    9752 command_runner.go:130] ! I0603 14:46:04.988996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.849284    9752 command_runner.go:130] ! I0603 14:46:04.989982       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:51.849284    9752 command_runner.go:130] ! I0603 14:46:05.046032       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.3.0/24"]
	I0603 14:51:51.849333    9752 command_runner.go:130] ! I0603 14:46:11.957254       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.849333    9752 command_runner.go:130] ! I0603 14:47:47.196592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:51.868896    9752 logs.go:123] Gathering logs for Docker ...
	I0603 14:51:51.868896    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0603 14:51:51.901316    9752 command_runner.go:130] > Jun 03 14:49:05 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:51.901906    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:51.901977    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:51.901977    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:51.901977    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:51.902050    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:51.902115    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:51.902183    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.902183    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0603 14:51:51.902261    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.902261    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:51.902323    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:51.902402    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:51.902402    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:51.902470    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:51.902549    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:51.902549    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:51.902623    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.902623    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0603 14:51:51.902692    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.902692    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:51.902692    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:51.902781    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:51.902844    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:51.902844    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:51.902925    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:51.902983    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:51.902983    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.903047    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0603 14:51:51.903047    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.903122    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0603 14:51:51.903122    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:51.903182    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.903182    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 systemd[1]: Starting Docker Application Container Engine...
	I0603 14:51:51.903243    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.547305957Z" level=info msg="Starting up"
	I0603 14:51:51.903302    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.548486369Z" level=info msg="containerd not running, starting managed containerd"
	I0603 14:51:51.903302    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.550163087Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=663
	I0603 14:51:51.903383    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.588439684Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 14:51:51.903447    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615622567Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 14:51:51.903508    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615812869Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 14:51:51.903561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615892669Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 14:51:51.903624    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615996071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.903709    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.616816479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.903771    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.616941980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.903826    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617127782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.903887    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617266784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.903950    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617291284Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 14:51:51.904010    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617304084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.904065    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617934891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.904065    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.618718299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.904186    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621568528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.904244    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621673229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.904300    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621927432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.904381    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622026433Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 14:51:51.904443    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622569239Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 14:51:51.904503    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622740941Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 14:51:51.904566    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622759241Z" level=info msg="metadata content store policy set" policy=shared
	I0603 14:51:51.904627    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.634889967Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 14:51:51.904719    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.634987368Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 14:51:51.904777    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635019568Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 14:51:51.904829    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635037868Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 14:51:51.904829    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635068969Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 14:51:51.904888    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635139569Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 14:51:51.904948    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635454873Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 14:51:51.905006    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635562874Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 14:51:51.905006    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635584474Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 14:51:51.905059    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635599174Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 14:51:51.905117    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635613674Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905176    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635627574Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905235    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635643175Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905288    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635663175Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905288    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635679475Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905364    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635693275Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905426    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635706375Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905484    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635718075Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.905547    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635850277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905606    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635881177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905708    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635899277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905767    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635913377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905819    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635929077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905877    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635942078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905935    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635954478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.905991    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635967678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906049    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635981078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906106    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635996378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906164    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636009278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906220    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636021378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906272    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636050579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906330    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636066579Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 14:51:51.906409    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636087279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906468    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636101979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906530    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636113679Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 14:51:51.906590    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636360182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 14:51:51.906669    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636390182Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 14:51:51.906747    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636405182Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 14:51:51.906803    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636417883Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 14:51:51.906882    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636428083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.906937    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636445483Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 14:51:51.906998    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636457683Z" level=info msg="NRI interface is disabled by configuration."
	I0603 14:51:51.907054    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636895188Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 14:51:51.907115    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637062689Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 14:51:51.907115    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637110790Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 14:51:51.907195    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637130090Z" level=info msg="containerd successfully booted in 0.051012s"
	I0603 14:51:51.907278    9752 command_runner.go:130] > Jun 03 14:49:58 multinode-720500 dockerd[657]: time="2024-06-03T14:49:58.605269655Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 14:51:51.907278    9752 command_runner.go:130] > Jun 03 14:49:58 multinode-720500 dockerd[657]: time="2024-06-03T14:49:58.830205845Z" level=info msg="Loading containers: start."
	I0603 14:51:51.907331    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.290763156Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 14:51:51.907410    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.371043862Z" level=info msg="Loading containers: done."
	I0603 14:51:51.907465    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.398495238Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 14:51:51.907650    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.399429147Z" level=info msg="Daemon has completed initialization"
	I0603 14:51:51.907650    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.454347399Z" level=info msg="API listen on [::]:2376"
	I0603 14:51:51.907715    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.454526701Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 14:51:51.907769    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 systemd[1]: Started Docker Application Container Engine.
	I0603 14:51:51.907769    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 systemd[1]: Stopping Docker Application Container Engine...
	I0603 14:51:51.907769    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.502444000Z" level=info msg="Processing signal 'terminated'"
	I0603 14:51:51.907769    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.507803805Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0603 14:51:51.907931    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508158405Z" level=info msg="Daemon shutdown complete"
	I0603 14:51:51.907931    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508284905Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0603 14:51:51.908039    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508315705Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0603 14:51:51.908077    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: docker.service: Deactivated successfully.
	I0603 14:51:51.908121    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: Stopped Docker Application Container Engine.
	I0603 14:51:51.908185    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: Starting Docker Application Container Engine...
	I0603 14:51:51.908185    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.581999493Z" level=info msg="Starting up"
	I0603 14:51:51.908261    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.582971494Z" level=info msg="containerd not running, starting managed containerd"
	I0603 14:51:51.908261    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.586955297Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1060
	I0603 14:51:51.908323    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.619972528Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 14:51:51.908402    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.642740749Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 14:51:51.908517    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.642897349Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 14:51:51.908664    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643057949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 14:51:51.908734    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643079049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.908801    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643105249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.908866    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643117549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.908938    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643236149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.908987    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643414849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.909049    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643436249Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 14:51:51.909126    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643446349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.909176    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643469050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.909176    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643579550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.909276    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646283452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.909317    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646409552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:51.909443    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646539152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646683652Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646720152Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.647911754Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648009354Z" level=info msg="metadata content store policy set" policy=shared
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648261654Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648362554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648383154Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648399754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648413954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648460954Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649437555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649582355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649628755Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649649855Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649667455Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649683955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649698955Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649721455Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649742255Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649758455Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649834555Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649964955Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650022156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.909476    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650042056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910020    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650059256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910020    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650077256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910020    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650091456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910020    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650109256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910020    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650125756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650143656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650161256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650181156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650384856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650434256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650459456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650483856Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650511256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650529056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650544556Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650596756Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650696356Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650722156Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650741356Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650755156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650769156Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650940656Z" level=info msg="NRI interface is disabled by configuration."
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652184258Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652391658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652570358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652616758Z" level=info msg="containerd successfully booted in 0.035610s"
	I0603 14:51:51.910189    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.629822557Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 14:51:51.910729    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.661126586Z" level=info msg="Loading containers: start."
	I0603 14:51:51.910729    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.933266636Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 14:51:51.910780    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.024107020Z" level=info msg="Loading containers: done."
	I0603 14:51:51.910780    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.055971749Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 14:51:51.910780    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.056192749Z" level=info msg="Daemon has completed initialization"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.104434794Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.104654694Z" level=info msg="API listen on [::]:2376"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 systemd[1]: Started Docker Application Container Engine.
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Loaded network plugin cni"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Start cri-dockerd grpc backend"
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-c9wpc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a\""
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-n2t5d_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0\""
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.786808143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.786968543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.787857244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.788128044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.878884027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882292830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882532331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882658231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.964961706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.910882    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965059107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911422    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965073207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965170307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0461b752e72814194a3ff0778ad4897f646990c90f8c3fcfb9c28be750bfab15/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.004294343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.006505445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.006802445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.007209145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/29feb700b8ebf36a5e533c2d019afb67137df3c39cd996736aba2eea6197e1b3/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e60bc15f541ebe44a8b2d1cc1a4a878d35fac3b2b8b23ad5b59ae6a7c18fa90/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/192b150e443d2d545d193223f6cdc02bc60fa88f9e646c72e84cad439aec3645/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330597043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330771943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330809243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330940843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.411710918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412168918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412399218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412596918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.543921039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544077939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544114939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544224939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547915343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547962443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547974143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.548055043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596002188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596253788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596401388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596628788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633733423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633807223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633821423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633921623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665408852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665567252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665590052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665814152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ae2b089ecf3ba840b08192449967b2406f6c6d0d8a56a114ddaabc35e3c7ee5/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b4a4ad712a66e8ac5a3ba6d988006318e7c0932c2ad0e4ce9838e7a98695f555/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.147693095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.147891096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.148071396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.148525196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.911489    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236102677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236209377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236229077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236423777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a3698c141b11639f71ba16cbcb832e7c02097b07aaf307ba72c7cf41a64d9dde/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.541976658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.542524859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.542803559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.545377661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1054]: time="2024-06-03T14:51:11.898791571Z" level=info msg="ignoring event" container=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.899973164Z" level=info msg="shim disconnected" id=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 namespace=moby
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.900143563Z" level=warning msg="cleaning up after shim disconnected" id=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 namespace=moby
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.900158663Z" level=info msg="cleaning up dead shim" namespace=moby
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147466127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147614527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147634527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.148526626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.314851642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.315085942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.315407842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.320950643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354750647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354889547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354906247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.355401447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894225423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894606924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894797424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894956925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.942044061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.942892263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:51.912470    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.943014363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.943428065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.913537    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.914116    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:51.942671    9752 logs.go:123] Gathering logs for dmesg ...
	I0603 14:51:51.942671    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 14:51:51.966055    9752 command_runner.go:130] > [Jun 3 14:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.128622] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.023991] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.059620] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.020549] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0603 14:51:51.966055    9752 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +5.342920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.685939] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +1.735023] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0603 14:51:51.966055    9752 command_runner.go:130] > [Jun 3 14:49] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0603 14:51:51.966055    9752 command_runner.go:130] > [ +50.878858] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.173829] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	I0603 14:51:51.966055    9752 command_runner.go:130] > [Jun 3 14:50] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.115993] kauditd_printk_skb: 73 callbacks suppressed
	I0603 14:51:51.966055    9752 command_runner.go:130] > [  +0.526092] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	I0603 14:51:51.966647    9752 command_runner.go:130] > [  +0.219569] systemd-fstab-generator[1032]: Ignoring "noauto" option for root device
	I0603 14:51:51.966647    9752 command_runner.go:130] > [  +0.239915] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	I0603 14:51:51.966739    9752 command_runner.go:130] > [  +2.915659] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0603 14:51:51.966739    9752 command_runner.go:130] > [  +0.214861] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0603 14:51:51.966739    9752 command_runner.go:130] > [  +0.207351] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	I0603 14:51:51.966739    9752 command_runner.go:130] > [  +0.266530] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	I0603 14:51:51.966798    9752 command_runner.go:130] > [  +0.876661] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	I0603 14:51:51.966837    9752 command_runner.go:130] > [  +0.110633] kauditd_printk_skb: 205 callbacks suppressed
	I0603 14:51:51.966837    9752 command_runner.go:130] > [  +3.640158] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	I0603 14:51:51.966837    9752 command_runner.go:130] > [  +1.365325] kauditd_printk_skb: 49 callbacks suppressed
	I0603 14:51:51.966837    9752 command_runner.go:130] > [  +5.844179] kauditd_printk_skb: 25 callbacks suppressed
	I0603 14:51:51.966888    9752 command_runner.go:130] > [  +3.106296] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	I0603 14:51:51.966888    9752 command_runner.go:130] > [  +8.568344] kauditd_printk_skb: 70 callbacks suppressed
	I0603 14:51:51.968819    9752 logs.go:123] Gathering logs for kube-scheduler [e2d000674d52] ...
	I0603 14:51:51.968864    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2d000674d52"
	I0603 14:51:51.996037    9752 command_runner.go:130] ! I0603 14:50:36.598072       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:51.996037    9752 command_runner.go:130] ! W0603 14:50:39.337367       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 14:51:51.996433    9752 command_runner.go:130] ! W0603 14:50:39.337481       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:51.996433    9752 command_runner.go:130] ! W0603 14:50:39.337517       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 14:51:51.996433    9752 command_runner.go:130] ! W0603 14:50:39.337620       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.434477       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.434769       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.439758       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.442615       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.442644       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.443721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:51.996433    9752 command_runner.go:130] ! I0603 14:50:39.542876       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:51.999144    9752 logs.go:123] Gathering logs for kube-scheduler [ec3860b2bb3e] ...
	I0603 14:51:51.999207    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3860b2bb3e"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:13.528076       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.031664       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.031870       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.032299       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.032427       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:15.125795       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:15.125934       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:15.129030       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:15.132330       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:15.140068       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:52.026749    9752 command_runner.go:130] ! I0603 14:27:15.132344       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.148563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.026749    9752 command_runner.go:130] ! E0603 14:27:15.150706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.151023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:52.026749    9752 command_runner.go:130] ! E0603 14:27:15.152765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.154981       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! E0603 14:27:15.155066       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:52.026749    9752 command_runner.go:130] ! W0603 14:27:15.155620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.026749    9752 command_runner.go:130] ! E0603 14:27:15.155698       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.027473    9752 command_runner.go:130] ! W0603 14:27:15.155839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.027473    9752 command_runner.go:130] ! E0603 14:27:15.155928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.027473    9752 command_runner.go:130] ! W0603 14:27:15.151535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:52.027602    9752 command_runner.go:130] ! E0603 14:27:15.156969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:52.027670    9752 command_runner.go:130] ! W0603 14:27:15.156902       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.158297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.151896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.159055       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.152056       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.159892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.152248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.152377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.152535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.152729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.156318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:15.151779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.160787       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.160968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.161285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.161862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.161874       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:15.161880       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! W0603 14:27:16.140920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:52.027700    9752 command_runner.go:130] ! E0603 14:27:16.140979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:52.028285    9752 command_runner.go:130] ! W0603 14:27:16.241899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:52.028285    9752 command_runner.go:130] ! E0603 14:27:16.242196       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:52.028285    9752 command_runner.go:130] ! W0603 14:27:16.262469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028285    9752 command_runner.go:130] ! E0603 14:27:16.263070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028285    9752 command_runner.go:130] ! W0603 14:27:16.294257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028475    9752 command_runner.go:130] ! E0603 14:27:16.294495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028475    9752 command_runner.go:130] ! W0603 14:27:16.364252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:52.028475    9752 command_runner.go:130] ! E0603 14:27:16.364604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:52.028565    9752 command_runner.go:130] ! W0603 14:27:16.422522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:52.028565    9752 command_runner.go:130] ! E0603 14:27:16.422581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.468112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.468324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.510809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.511288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.596260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.596369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.607837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.608073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.665087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.666440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.711247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.711594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.716923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.716968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:52.028629    9752 command_runner.go:130] ! W0603 14:27:16.731690       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:52.028629    9752 command_runner.go:130] ! E0603 14:27:16.732816       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:52.029163    9752 command_runner.go:130] ! W0603 14:27:16.743716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:52.029163    9752 command_runner.go:130] ! E0603 14:27:16.743766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:52.029295    9752 command_runner.go:130] ! I0603 14:27:18.441261       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:52.029295    9752 command_runner.go:130] ! E0603 14:48:07.717597       1 run.go:74] "command failed" err="finished without leader elect"
	I0603 14:51:52.039727    9752 logs.go:123] Gathering logs for coredns [f9b260d61dfb] ...
	I0603 14:51:52.039727    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b260d61dfb"
	I0603 14:51:52.069011    9752 command_runner.go:130] > .:53
	I0603 14:51:52.069121    9752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	I0603 14:51:52.069121    9752 command_runner.go:130] > CoreDNS-1.11.1
	I0603 14:51:52.069121    9752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 14:51:52.069121    9752 command_runner.go:130] > [INFO] 127.0.0.1:44244 - 27530 "HINFO IN 6157212600695805867.8146164028617998750. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029059168s
	I0603 14:51:52.069401    9752 logs.go:123] Gathering logs for kube-controller-manager [f14b3b67d8f2] ...
	I0603 14:51:52.069401    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14b3b67d8f2"
	I0603 14:51:52.097576    9752 command_runner.go:130] ! I0603 14:50:37.132219       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:52.097576    9752 command_runner.go:130] ! I0603 14:50:37.965887       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 14:51:52.098038    9752 command_runner.go:130] ! I0603 14:50:37.966244       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:52.098038    9752 command_runner.go:130] ! I0603 14:50:37.969206       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:52.098106    9752 command_runner.go:130] ! I0603 14:50:37.969593       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:52.098106    9752 command_runner.go:130] ! I0603 14:50:37.970401       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 14:51:52.098145    9752 command_runner.go:130] ! I0603 14:50:37.970711       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:52.098259    9752 command_runner.go:130] ! I0603 14:50:41.339512       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 14:51:52.098333    9752 command_runner.go:130] ! I0603 14:50:41.341523       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 14:51:52.098333    9752 command_runner.go:130] ! E0603 14:50:41.352670       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 14:51:52.099035    9752 command_runner.go:130] ! I0603 14:50:41.352747       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 14:51:52.099035    9752 command_runner.go:130] ! I0603 14:50:41.352812       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 14:51:52.099035    9752 command_runner.go:130] ! I0603 14:50:41.408502       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 14:51:52.099565    9752 command_runner.go:130] ! I0603 14:50:41.409411       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 14:51:52.099565    9752 command_runner.go:130] ! I0603 14:50:41.409645       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 14:51:52.099865    9752 command_runner.go:130] ! I0603 14:50:41.419223       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 14:51:52.100181    9752 command_runner.go:130] ! I0603 14:50:41.421972       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 14:51:52.100376    9752 command_runner.go:130] ! I0603 14:50:41.422044       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 14:51:52.100376    9752 command_runner.go:130] ! I0603 14:50:41.427251       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 14:51:52.100376    9752 command_runner.go:130] ! I0603 14:50:41.427473       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 14:51:52.100376    9752 command_runner.go:130] ! I0603 14:50:41.427485       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 14:51:52.100376    9752 command_runner.go:130] ! I0603 14:50:41.433520       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 14:51:52.101076    9752 command_runner.go:130] ! I0603 14:50:41.433884       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.442828       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.442944       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.443317       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.443408       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.443456       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.444287       1 shared_informer.go:320] Caches are synced for tokens
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.448688       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.448996       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.449010       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.471390       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 14:51:52.101232    9752 command_runner.go:130] ! I0603 14:50:41.478411       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 14:51:52.101765    9752 command_runner.go:130] ! I0603 14:50:41.478486       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 14:51:52.101765    9752 command_runner.go:130] ! I0603 14:50:41.496707       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:52.101765    9752 command_runner.go:130] ! I0603 14:50:41.496851       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:52.101864    9752 command_runner.go:130] ! I0603 14:50:41.496864       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 14:51:52.101864    9752 command_runner.go:130] ! I0603 14:50:41.512398       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 14:51:52.101910    9752 command_runner.go:130] ! I0603 14:50:41.512785       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 14:51:52.101910    9752 command_runner.go:130] ! I0603 14:50:41.514642       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 14:51:52.101910    9752 command_runner.go:130] ! I0603 14:50:41.526995       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 14:51:52.101910    9752 command_runner.go:130] ! I0603 14:50:41.528483       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 14:51:52.101910    9752 command_runner.go:130] ! I0603 14:50:41.528503       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 14:51:52.102001    9752 command_runner.go:130] ! I0603 14:50:41.560312       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 14:51:52.102001    9752 command_runner.go:130] ! I0603 14:50:41.560410       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 14:51:52.102056    9752 command_runner.go:130] ! I0603 14:50:41.560606       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 14:51:52.102056    9752 command_runner.go:130] ! W0603 14:50:41.560637       1 shared_informer.go:597] resyncPeriod 13h36m9.576172414s is smaller than resyncCheckPeriod 18h19m8.512720564s and the informer has already started. Changing it to 18h19m8.512720564s
	I0603 14:51:52.102105    9752 command_runner.go:130] ! I0603 14:50:41.560790       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 14:51:52.102105    9752 command_runner.go:130] ! I0603 14:50:41.560834       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 14:51:52.102156    9752 command_runner.go:130] ! I0603 14:50:41.561009       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.562817       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.562891       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.562939       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.562993       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.563015       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.563032       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.563098       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.564183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 14:51:52.102197    9752 command_runner.go:130] ! I0603 14:50:41.564221       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 14:51:52.102426    9752 command_runner.go:130] ! I0603 14:50:41.564392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 14:51:52.102426    9752 command_runner.go:130] ! I0603 14:50:41.564485       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.564524       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.564636       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.564663       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.564687       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.565005       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.565020       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.565041       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.581314       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.587130       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.587228       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.587968       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.594087       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.594455       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.594469       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.597147       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.597498       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.597530       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.607190       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.607598       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.607632       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.610674       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.610909       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.611242       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.614142       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.614447       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.614483       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 14:51:52.102483    9752 command_runner.go:130] ! I0603 14:50:41.635724       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.635913       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.635952       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.636091       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.640219       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.640668       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.640872       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 14:51:52.103011    9752 command_runner.go:130] ! I0603 14:50:41.653671       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 14:51:52.103142    9752 command_runner.go:130] ! I0603 14:50:41.654023       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 14:51:52.103142    9752 command_runner.go:130] ! I0603 14:50:41.654058       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 14:51:52.103142    9752 command_runner.go:130] ! I0603 14:50:41.667205       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 14:51:52.103142    9752 command_runner.go:130] ! I0603 14:50:41.667229       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 14:51:52.103142    9752 command_runner.go:130] ! I0603 14:50:41.667236       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 14:51:52.103248    9752 command_runner.go:130] ! I0603 14:50:41.669727       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 14:51:52.103248    9752 command_runner.go:130] ! I0603 14:50:41.669883       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 14:51:52.103248    9752 command_runner.go:130] ! I0603 14:50:41.726233       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 14:51:52.103290    9752 command_runner.go:130] ! I0603 14:50:41.726660       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 14:51:52.103290    9752 command_runner.go:130] ! I0603 14:50:41.729282       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 14:51:52.103290    9752 command_runner.go:130] ! I0603 14:50:41.729661       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 14:51:52.103364    9752 command_runner.go:130] ! I0603 14:50:41.729876       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 14:51:52.103364    9752 command_runner.go:130] ! I0603 14:50:41.736485       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 14:51:52.103423    9752 command_runner.go:130] ! I0603 14:50:41.737260       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 14:51:52.103423    9752 command_runner.go:130] ! E0603 14:50:41.740502       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 14:51:52.103476    9752 command_runner.go:130] ! I0603 14:50:41.740814       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 14:51:52.103476    9752 command_runner.go:130] ! I0603 14:50:41.740933       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 14:51:52.103516    9752 command_runner.go:130] ! I0603 14:50:41.741056       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 14:51:52.103516    9752 command_runner.go:130] ! I0603 14:50:41.750961       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 14:51:52.103516    9752 command_runner.go:130] ! I0603 14:50:41.751223       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 14:51:52.103569    9752 command_runner.go:130] ! I0603 14:50:41.751477       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 14:51:52.103569    9752 command_runner.go:130] ! I0603 14:50:41.792608       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 14:51:52.103609    9752 command_runner.go:130] ! I0603 14:50:41.792759       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 14:51:52.103656    9752 command_runner.go:130] ! I0603 14:50:41.792773       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 14:51:52.103656    9752 command_runner.go:130] ! I0603 14:50:41.844612       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 14:51:52.103695    9752 command_runner.go:130] ! I0603 14:50:41.844676       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 14:51:52.103695    9752 command_runner.go:130] ! I0603 14:50:41.844688       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 14:51:52.103748    9752 command_runner.go:130] ! I0603 14:50:41.896427       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 14:51:52.103748    9752 command_runner.go:130] ! I0603 14:50:41.896537       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 14:51:52.103793    9752 command_runner.go:130] ! I0603 14:50:41.896561       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 14:51:52.103793    9752 command_runner.go:130] ! I0603 14:50:41.896589       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 14:51:52.103846    9752 command_runner.go:130] ! I0603 14:50:41.942852       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 14:51:52.103846    9752 command_runner.go:130] ! I0603 14:50:41.943245       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 14:51:52.103887    9752 command_runner.go:130] ! I0603 14:50:41.943758       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 14:51:52.103887    9752 command_runner.go:130] ! I0603 14:50:41.993465       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 14:51:52.103887    9752 command_runner.go:130] ! I0603 14:50:41.993559       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 14:51:52.103941    9752 command_runner.go:130] ! I0603 14:50:41.993571       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 14:51:52.103941    9752 command_runner.go:130] ! I0603 14:50:42.042940       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:42.043287       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:42.043532       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:42.043637       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.110253       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.110544       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.110823       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.111251       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.114516       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.114754       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.114859       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.115420       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.120172       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.120726       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.120900       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.130702       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.132004       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.132310       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.135969       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.136243       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.136643       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.137507       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.137603       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.137643       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.137983       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.138267       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.138302       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.138609       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.138713       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.138746       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.138986       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.143612       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.143872       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.143971       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.153209       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.172692       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.193739       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 14:51:52.103981    9752 command_runner.go:130] ! I0603 14:50:52.202204       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500\" does not exist"
	I0603 14:51:52.104567    9752 command_runner.go:130] ! I0603 14:50:52.202247       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:51:52.104567    9752 command_runner.go:130] ! I0603 14:50:52.202568       1 shared_informer.go:320] Caches are synced for TTL
	I0603 14:51:52.104567    9752 command_runner.go:130] ! I0603 14:50:52.202880       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:52.104567    9752 command_runner.go:130] ! I0603 14:50:52.206448       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:52.104711    9752 command_runner.go:130] ! I0603 14:50:52.209857       1 shared_informer.go:320] Caches are synced for expand
	I0603 14:51:52.104711    9752 command_runner.go:130] ! I0603 14:50:52.210173       1 shared_informer.go:320] Caches are synced for namespace
	I0603 14:51:52.104733    9752 command_runner.go:130] ! I0603 14:50:52.211842       1 shared_informer.go:320] Caches are synced for node
	I0603 14:51:52.104733    9752 command_runner.go:130] ! I0603 14:50:52.213573       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 14:51:52.104733    9752 command_runner.go:130] ! I0603 14:50:52.213786       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 14:51:52.104733    9752 command_runner.go:130] ! I0603 14:50:52.213951       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 14:51:52.104733    9752 command_runner.go:130] ! I0603 14:50:52.214197       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 14:51:52.104838    9752 command_runner.go:130] ! I0603 14:50:52.227537       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 14:51:52.104883    9752 command_runner.go:130] ! I0603 14:50:52.228829       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 14:51:52.104883    9752 command_runner.go:130] ! I0603 14:50:52.230275       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.233623       1 shared_informer.go:320] Caches are synced for HPA
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.237260       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.238266       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.238408       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.238593       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.239064       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 14:51:52.104951    9752 command_runner.go:130] ! I0603 14:50:52.242643       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 14:51:52.105035    9752 command_runner.go:130] ! I0603 14:50:52.243734       1 shared_informer.go:320] Caches are synced for taint
	I0603 14:51:52.105035    9752 command_runner.go:130] ! I0603 14:50:52.243982       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 14:51:52.105035    9752 command_runner.go:130] ! I0603 14:50:52.246907       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 14:51:52.105035    9752 command_runner.go:130] ! I0603 14:50:52.248798       1 shared_informer.go:320] Caches are synced for GC
	I0603 14:51:52.105035    9752 command_runner.go:130] ! I0603 14:50:52.249570       1 shared_informer.go:320] Caches are synced for service account
	I0603 14:51:52.105035    9752 command_runner.go:130] ! I0603 14:50:52.252842       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 14:51:52.105124    9752 command_runner.go:130] ! I0603 14:50:52.254214       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 14:51:52.105124    9752 command_runner.go:130] ! I0603 14:50:52.278584       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 14:51:52.105124    9752 command_runner.go:130] ! I0603 14:50:52.278573       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500"
	I0603 14:51:52.105124    9752 command_runner.go:130] ! I0603 14:50:52.278738       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:51:52.105124    9752 command_runner.go:130] ! I0603 14:50:52.278760       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:51:52.105216    9752 command_runner.go:130] ! I0603 14:50:52.279382       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:52.105216    9752 command_runner.go:130] ! I0603 14:50:52.288184       1 shared_informer.go:320] Caches are synced for disruption
	I0603 14:51:52.105216    9752 command_runner.go:130] ! I0603 14:50:52.293854       1 shared_informer.go:320] Caches are synced for deployment
	I0603 14:51:52.105216    9752 command_runner.go:130] ! I0603 14:50:52.294911       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 14:51:52.105216    9752 command_runner.go:130] ! I0603 14:50:52.297844       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 14:51:52.105299    9752 command_runner.go:130] ! I0603 14:50:52.297906       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 14:51:52.105299    9752 command_runner.go:130] ! I0603 14:50:52.303945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.988424ms"
	I0603 14:51:52.105299    9752 command_runner.go:130] ! I0603 14:50:52.304988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.899µs"
	I0603 14:51:52.105299    9752 command_runner.go:130] ! I0603 14:50:52.309899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.433483ms"
	I0603 14:51:52.105398    9752 command_runner.go:130] ! I0603 14:50:52.310618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0603 14:51:52.105398    9752 command_runner.go:130] ! I0603 14:50:52.311874       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:51:52.105442    9752 command_runner.go:130] ! I0603 14:50:52.315773       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:51:52.105442    9752 command_runner.go:130] ! I0603 14:50:52.322625       1 shared_informer.go:320] Caches are synced for job
	I0603 14:51:52.105482    9752 command_runner.go:130] ! I0603 14:50:52.328121       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:51:52.105482    9752 command_runner.go:130] ! I0603 14:50:52.345391       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:51:52.105482    9752 command_runner.go:130] ! I0603 14:50:52.415295       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:51:52.105482    9752 command_runner.go:130] ! I0603 14:50:52.416018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:51:52.105545    9752 command_runner.go:130] ! I0603 14:50:52.421610       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:51:52.105575    9752 command_runner.go:130] ! I0603 14:50:52.453966       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:50:52.465679       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:50:52.907461       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:50:52.937479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:50:52.937578       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:51:22.286800       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:51:45.740640       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.050345ms"
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:51:45.740735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.201µs"
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:51:45.758728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.201µs"
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:51:45.833756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.845189ms"
	I0603 14:51:52.105621    9752 command_runner.go:130] ! I0603 14:51:45.833914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.301µs"
	I0603 14:51:52.121042    9752 logs.go:123] Gathering logs for kindnet [ab840a6a9856] ...
	I0603 14:51:52.121042    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab840a6a9856"
	I0603 14:51:52.148865    9752 command_runner.go:130] ! I0603 14:37:02.418496       1 main.go:227] handling current node
	I0603 14:51:52.148865    9752 command_runner.go:130] ! I0603 14:37:02.418509       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.149272    9752 command_runner.go:130] ! I0603 14:37:02.418514       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.149272    9752 command_runner.go:130] ! I0603 14:37:02.419057       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.149272    9752 command_runner.go:130] ! I0603 14:37:02.419146       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.149272    9752 command_runner.go:130] ! I0603 14:37:12.433874       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.149272    9752 command_runner.go:130] ! I0603 14:37:12.433964       1 main.go:227] handling current node
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:12.433979       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:12.433987       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:12.434708       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:12.434812       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:22.441734       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:22.443317       1 main.go:227] handling current node
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:22.443366       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:22.443394       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.149379    9752 command_runner.go:130] ! I0603 14:37:22.443536       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:22.443544       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:32.458669       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:32.458715       1 main.go:227] handling current node
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:32.458746       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:32.458759       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:32.459272       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.149525    9752 command_runner.go:130] ! I0603 14:37:32.459313       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.149662    9752 command_runner.go:130] ! I0603 14:37:42.465893       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.149662    9752 command_runner.go:130] ! I0603 14:37:42.466039       1 main.go:227] handling current node
	I0603 14:51:52.149707    9752 command_runner.go:130] ! I0603 14:37:42.466054       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.149707    9752 command_runner.go:130] ! I0603 14:37:42.466062       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.149707    9752 command_runner.go:130] ! I0603 14:37:42.466530       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.149707    9752 command_runner.go:130] ! I0603 14:37:42.466713       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.149707    9752 command_runner.go:130] ! I0603 14:37:52.484160       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.149707    9752 command_runner.go:130] ! I0603 14:37:52.484343       1 main.go:227] handling current node
	I0603 14:51:52.149799    9752 command_runner.go:130] ! I0603 14:37:52.484358       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.149799    9752 command_runner.go:130] ! I0603 14:37:52.484366       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.149967    9752 command_runner.go:130] ! I0603 14:37:52.484918       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.149967    9752 command_runner.go:130] ! I0603 14:37:52.485003       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.149967    9752 command_runner.go:130] ! I0603 14:38:02.499379       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.149967    9752 command_runner.go:130] ! I0603 14:38:02.500157       1 main.go:227] handling current node
	I0603 14:51:52.149967    9752 command_runner.go:130] ! I0603 14:38:02.500459       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.168486    9752 command_runner.go:130] ! I0603 14:38:02.500600       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.168486    9752 command_runner.go:130] ! I0603 14:38:02.500943       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.168874    9752 command_runner.go:130] ! I0603 14:38:02.501037       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.168874    9752 command_runner.go:130] ! I0603 14:38:12.510568       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.168975    9752 command_runner.go:130] ! I0603 14:38:12.510676       1 main.go:227] handling current node
	I0603 14:51:52.168975    9752 command_runner.go:130] ! I0603 14:38:12.510691       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.169089    9752 command_runner.go:130] ! I0603 14:38:12.510699       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.171864    9752 command_runner.go:130] ! I0603 14:38:12.511065       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172262    9752 command_runner.go:130] ! I0603 14:38:12.511143       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172325    9752 command_runner.go:130] ! I0603 14:38:22.523564       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172325    9752 command_runner.go:130] ! I0603 14:38:22.523667       1 main.go:227] handling current node
	I0603 14:51:52.172325    9752 command_runner.go:130] ! I0603 14:38:22.523681       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:22.523690       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:22.524005       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:22.524127       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:32.531830       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:32.532127       1 main.go:227] handling current node
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:32.532312       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:32.532328       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:32.532640       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:32.532677       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:42.545963       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172393    9752 command_runner.go:130] ! I0603 14:38:42.546065       1 main.go:227] handling current node
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:42.546080       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:42.546088       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:42.546348       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:42.546488       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:52.559438       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:52.559480       1 main.go:227] handling current node
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:52.559491       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:52.559497       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:52.559891       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172545    9752 command_runner.go:130] ! I0603 14:38:52.560039       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172706    9752 command_runner.go:130] ! I0603 14:39:02.565901       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172706    9752 command_runner.go:130] ! I0603 14:39:02.566044       1 main.go:227] handling current node
	I0603 14:51:52.172706    9752 command_runner.go:130] ! I0603 14:39:02.566059       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172772    9752 command_runner.go:130] ! I0603 14:39:02.566066       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172772    9752 command_runner.go:130] ! I0603 14:39:02.566452       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172772    9752 command_runner.go:130] ! I0603 14:39:02.566542       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172825    9752 command_runner.go:130] ! I0603 14:39:12.580562       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172825    9752 command_runner.go:130] ! I0603 14:39:12.580900       1 main.go:227] handling current node
	I0603 14:51:52.172863    9752 command_runner.go:130] ! I0603 14:39:12.581000       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172898    9752 command_runner.go:130] ! I0603 14:39:12.581036       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172898    9752 command_runner.go:130] ! I0603 14:39:12.581299       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172920    9752 command_runner.go:130] ! I0603 14:39:12.581368       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:22.589560       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:22.589667       1 main.go:227] handling current node
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:22.589684       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:22.589692       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:22.590588       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:22.590765       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:32.597414       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:32.597518       1 main.go:227] handling current node
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:32.597534       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:32.597541       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:32.597952       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:32.598225       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:42.608987       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:42.609016       1 main.go:227] handling current node
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:42.609075       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:42.609129       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:42.609601       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:42.609617       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:52.622153       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:52.622304       1 main.go:227] handling current node
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:52.622322       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:52.622329       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:52.622994       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:39:52.623087       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:02.643681       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:02.643725       1 main.go:227] handling current node
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:02.643738       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:02.643744       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:02.644288       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:02.644378       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:12.652030       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:12.652123       1 main.go:227] handling current node
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:12.652138       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.172947    9752 command_runner.go:130] ! I0603 14:40:12.652145       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.173477    9752 command_runner.go:130] ! I0603 14:40:12.652402       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.173477    9752 command_runner.go:130] ! I0603 14:40:12.652480       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.173538    9752 command_runner.go:130] ! I0603 14:40:22.661893       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.173538    9752 command_runner.go:130] ! I0603 14:40:22.661999       1 main.go:227] handling current node
	I0603 14:51:52.173538    9752 command_runner.go:130] ! I0603 14:40:22.662015       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.173538    9752 command_runner.go:130] ! I0603 14:40:22.662023       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.173538    9752 command_runner.go:130] ! I0603 14:40:22.662623       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.173538    9752 command_runner.go:130] ! I0603 14:40:22.662711       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.173668    9752 command_runner.go:130] ! I0603 14:40:32.676552       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.173668    9752 command_runner.go:130] ! I0603 14:40:32.676654       1 main.go:227] handling current node
	I0603 14:51:52.173668    9752 command_runner.go:130] ! I0603 14:40:32.676669       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.173707    9752 command_runner.go:130] ! I0603 14:40:32.676677       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.173707    9752 command_runner.go:130] ! I0603 14:40:32.676798       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.173707    9752 command_runner.go:130] ! I0603 14:40:32.676829       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.173775    9752 command_runner.go:130] ! I0603 14:40:42.690358       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.173775    9752 command_runner.go:130] ! I0603 14:40:42.690463       1 main.go:227] handling current node
	I0603 14:51:52.173813    9752 command_runner.go:130] ! I0603 14:40:42.690478       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.173813    9752 command_runner.go:130] ! I0603 14:40:42.690485       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.173813    9752 command_runner.go:130] ! I0603 14:40:42.691131       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.173863    9752 command_runner.go:130] ! I0603 14:40:42.691265       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.173863    9752 command_runner.go:130] ! I0603 14:40:52.704086       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.173900    9752 command_runner.go:130] ! I0603 14:40:52.704406       1 main.go:227] handling current node
	I0603 14:51:52.173900    9752 command_runner.go:130] ! I0603 14:40:52.704615       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.173900    9752 command_runner.go:130] ! I0603 14:40:52.704801       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.173951    9752 command_runner.go:130] ! I0603 14:40:52.705555       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.173951    9752 command_runner.go:130] ! I0603 14:40:52.705594       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.173951    9752 command_runner.go:130] ! I0603 14:41:02.714922       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.173988    9752 command_runner.go:130] ! I0603 14:41:02.715404       1 main.go:227] handling current node
	I0603 14:51:52.173988    9752 command_runner.go:130] ! I0603 14:41:02.715629       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174032    9752 command_runner.go:130] ! I0603 14:41:02.715697       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174032    9752 command_runner.go:130] ! I0603 14:41:02.715836       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174070    9752 command_runner.go:130] ! I0603 14:41:02.717286       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174070    9752 command_runner.go:130] ! I0603 14:41:12.733829       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174070    9752 command_runner.go:130] ! I0603 14:41:12.733940       1 main.go:227] handling current node
	I0603 14:51:52.174121    9752 command_runner.go:130] ! I0603 14:41:12.733954       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174121    9752 command_runner.go:130] ! I0603 14:41:12.733962       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174121    9752 command_runner.go:130] ! I0603 14:41:12.734767       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174159    9752 command_runner.go:130] ! I0603 14:41:12.734861       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174159    9752 command_runner.go:130] ! I0603 14:41:22.747461       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174159    9752 command_runner.go:130] ! I0603 14:41:22.747575       1 main.go:227] handling current node
	I0603 14:51:52.174159    9752 command_runner.go:130] ! I0603 14:41:22.747589       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174208    9752 command_runner.go:130] ! I0603 14:41:22.747596       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174208    9752 command_runner.go:130] ! I0603 14:41:22.748388       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174245    9752 command_runner.go:130] ! I0603 14:41:22.748478       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174245    9752 command_runner.go:130] ! I0603 14:41:32.755048       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174245    9752 command_runner.go:130] ! I0603 14:41:32.755098       1 main.go:227] handling current node
	I0603 14:51:52.174245    9752 command_runner.go:130] ! I0603 14:41:32.755111       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174296    9752 command_runner.go:130] ! I0603 14:41:32.755118       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174296    9752 command_runner.go:130] ! I0603 14:41:32.755281       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174296    9752 command_runner.go:130] ! I0603 14:41:32.755297       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174296    9752 command_runner.go:130] ! I0603 14:41:42.769640       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:42.769732       1 main.go:227] handling current node
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:42.769748       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:42.769756       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:42.769900       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:42.769930       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:52.777787       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:52.777885       1 main.go:227] handling current node
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:52.777901       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:52.777909       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:52.778034       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:41:52.778047       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:02.796158       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:02.796336       1 main.go:227] handling current node
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:02.796352       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:02.796361       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:02.796675       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:02.796693       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:12.804901       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:12.805658       1 main.go:227] handling current node
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:12.805981       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:12.806077       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:12.808338       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:12.808446       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:22.822735       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:22.822779       1 main.go:227] handling current node
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:22.822792       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:22.822798       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:22.823041       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:22.823056       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:32.829730       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174373    9752 command_runner.go:130] ! I0603 14:42:32.829780       1 main.go:227] handling current node
	I0603 14:51:52.174905    9752 command_runner.go:130] ! I0603 14:42:32.829793       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.174905    9752 command_runner.go:130] ! I0603 14:42:32.829798       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.174964    9752 command_runner.go:130] ! I0603 14:42:32.830081       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.174964    9752 command_runner.go:130] ! I0603 14:42:32.830157       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.174964    9752 command_runner.go:130] ! I0603 14:42:42.843959       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.174964    9752 command_runner.go:130] ! I0603 14:42:42.844251       1 main.go:227] handling current node
	I0603 14:51:52.174964    9752 command_runner.go:130] ! I0603 14:42:42.844269       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175073    9752 command_runner.go:130] ! I0603 14:42:42.844278       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175128    9752 command_runner.go:130] ! I0603 14:42:42.844481       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175128    9752 command_runner.go:130] ! I0603 14:42:42.844489       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175128    9752 command_runner.go:130] ! I0603 14:42:52.970825       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175179    9752 command_runner.go:130] ! I0603 14:42:52.970941       1 main.go:227] handling current node
	I0603 14:51:52.175179    9752 command_runner.go:130] ! I0603 14:42:52.970957       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175179    9752 command_runner.go:130] ! I0603 14:42:52.970965       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175217    9752 command_runner.go:130] ! I0603 14:42:52.971359       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175217    9752 command_runner.go:130] ! I0603 14:42:52.971390       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175217    9752 command_runner.go:130] ! I0603 14:43:02.985233       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175217    9752 command_runner.go:130] ! I0603 14:43:02.985707       1 main.go:227] handling current node
	I0603 14:51:52.175267    9752 command_runner.go:130] ! I0603 14:43:02.985801       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175267    9752 command_runner.go:130] ! I0603 14:43:02.985813       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175305    9752 command_runner.go:130] ! I0603 14:43:02.986087       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175305    9752 command_runner.go:130] ! I0603 14:43:02.986213       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175305    9752 command_runner.go:130] ! I0603 14:43:13.001792       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175381    9752 command_runner.go:130] ! I0603 14:43:13.001903       1 main.go:227] handling current node
	I0603 14:51:52.175381    9752 command_runner.go:130] ! I0603 14:43:13.001919       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175381    9752 command_runner.go:130] ! I0603 14:43:13.001926       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175421    9752 command_runner.go:130] ! I0603 14:43:13.002409       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175421    9752 command_runner.go:130] ! I0603 14:43:13.002546       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175465    9752 command_runner.go:130] ! I0603 14:43:23.014350       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175465    9752 command_runner.go:130] ! I0603 14:43:23.014430       1 main.go:227] handling current node
	I0603 14:51:52.175507    9752 command_runner.go:130] ! I0603 14:43:23.014443       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175507    9752 command_runner.go:130] ! I0603 14:43:23.014466       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175543    9752 command_runner.go:130] ! I0603 14:43:23.014973       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175543    9752 command_runner.go:130] ! I0603 14:43:23.015050       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175575    9752 command_runner.go:130] ! I0603 14:43:33.028486       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:33.028618       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:33.028632       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:33.028639       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:33.028797       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:33.029137       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:43.042807       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:43.042971       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:43.043055       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:43.043063       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:43.043998       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:43.044018       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:53.060985       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:53.061106       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:53.061142       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:53.061153       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:53.061441       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:43:53.061530       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:03.074882       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:03.075006       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:03.075023       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:03.075031       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:03.075251       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:03.075287       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:13.082515       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:13.082634       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:13.082649       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:13.082657       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:13.083854       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:13.084020       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:23.096516       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:23.096561       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:23.096574       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:23.096585       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:23.098310       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:23.098383       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:33.105034       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:33.105146       1 main.go:227] handling current node
	I0603 14:51:52.175603    9752 command_runner.go:130] ! I0603 14:44:33.105199       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176133    9752 command_runner.go:130] ! I0603 14:44:33.105211       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176133    9752 command_runner.go:130] ! I0603 14:44:33.105354       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:33.105362       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:43.115437       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:43.115557       1 main.go:227] handling current node
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:43.115572       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:43.115580       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:43.116248       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176174    9752 command_runner.go:130] ! I0603 14:44:43.116325       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176307    9752 command_runner.go:130] ! I0603 14:44:53.129841       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176307    9752 command_runner.go:130] ! I0603 14:44:53.129952       1 main.go:227] handling current node
	I0603 14:51:52.176363    9752 command_runner.go:130] ! I0603 14:44:53.129967       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176363    9752 command_runner.go:130] ! I0603 14:44:53.129992       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176363    9752 command_runner.go:130] ! I0603 14:44:53.130474       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176414    9752 command_runner.go:130] ! I0603 14:44:53.130513       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176414    9752 command_runner.go:130] ! I0603 14:45:03.145387       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176414    9752 command_runner.go:130] ! I0603 14:45:03.145506       1 main.go:227] handling current node
	I0603 14:51:52.176454    9752 command_runner.go:130] ! I0603 14:45:03.145522       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176454    9752 command_runner.go:130] ! I0603 14:45:03.145529       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176454    9752 command_runner.go:130] ! I0603 14:45:03.145991       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176506    9752 command_runner.go:130] ! I0603 14:45:03.146104       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176506    9752 command_runner.go:130] ! I0603 14:45:13.154208       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176506    9752 command_runner.go:130] ! I0603 14:45:13.154303       1 main.go:227] handling current node
	I0603 14:51:52.176546    9752 command_runner.go:130] ! I0603 14:45:13.154318       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176546    9752 command_runner.go:130] ! I0603 14:45:13.154325       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176546    9752 command_runner.go:130] ! I0603 14:45:13.154444       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176599    9752 command_runner.go:130] ! I0603 14:45:13.154751       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176599    9752 command_runner.go:130] ! I0603 14:45:23.167023       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176639    9752 command_runner.go:130] ! I0603 14:45:23.167139       1 main.go:227] handling current node
	I0603 14:51:52.176639    9752 command_runner.go:130] ! I0603 14:45:23.167156       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176639    9752 command_runner.go:130] ! I0603 14:45:23.167204       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176705    9752 command_runner.go:130] ! I0603 14:45:23.167490       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176705    9752 command_runner.go:130] ! I0603 14:45:23.167675       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176743    9752 command_runner.go:130] ! I0603 14:45:33.182518       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176743    9752 command_runner.go:130] ! I0603 14:45:33.182565       1 main.go:227] handling current node
	I0603 14:51:52.176743    9752 command_runner.go:130] ! I0603 14:45:33.182579       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176794    9752 command_runner.go:130] ! I0603 14:45:33.182586       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176794    9752 command_runner.go:130] ! I0603 14:45:33.183095       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176832    9752 command_runner.go:130] ! I0603 14:45:33.183227       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176832    9752 command_runner.go:130] ! I0603 14:45:43.191204       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176832    9752 command_runner.go:130] ! I0603 14:45:43.191291       1 main.go:227] handling current node
	I0603 14:51:52.176882    9752 command_runner.go:130] ! I0603 14:45:43.191307       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176882    9752 command_runner.go:130] ! I0603 14:45:43.191316       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.176882    9752 command_runner.go:130] ! I0603 14:45:43.191713       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.176920    9752 command_runner.go:130] ! I0603 14:45:43.191805       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.176920    9752 command_runner.go:130] ! I0603 14:45:53.200715       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.176920    9752 command_runner.go:130] ! I0603 14:45:53.200890       1 main.go:227] handling current node
	I0603 14:51:52.176969    9752 command_runner.go:130] ! I0603 14:45:53.200927       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.176969    9752 command_runner.go:130] ! I0603 14:45:53.200936       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177007    9752 command_runner.go:130] ! I0603 14:45:53.201688       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:52.177007    9752 command_runner.go:130] ! I0603 14:45:53.201766       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:52.177007    9752 command_runner.go:130] ! I0603 14:46:03.207719       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177057    9752 command_runner.go:130] ! I0603 14:46:03.207807       1 main.go:227] handling current node
	I0603 14:51:52.177057    9752 command_runner.go:130] ! I0603 14:46:03.207821       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177057    9752 command_runner.go:130] ! I0603 14:46:03.207828       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177094    9752 command_runner.go:130] ! I0603 14:46:13.222386       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177094    9752 command_runner.go:130] ! I0603 14:46:13.222505       1 main.go:227] handling current node
	I0603 14:51:52.177094    9752 command_runner.go:130] ! I0603 14:46:13.222522       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177144    9752 command_runner.go:130] ! I0603 14:46:13.222530       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177144    9752 command_runner.go:130] ! I0603 14:46:13.223020       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177182    9752 command_runner.go:130] ! I0603 14:46:13.223269       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177182    9752 command_runner.go:130] ! I0603 14:46:13.223648       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.22.151.134 Flags: [] Table: 0} 
	I0603 14:51:52.177233    9752 command_runner.go:130] ! I0603 14:46:23.237715       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177233    9752 command_runner.go:130] ! I0603 14:46:23.237767       1 main.go:227] handling current node
	I0603 14:51:52.177233    9752 command_runner.go:130] ! I0603 14:46:23.237797       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177271    9752 command_runner.go:130] ! I0603 14:46:23.237803       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177271    9752 command_runner.go:130] ! I0603 14:46:23.237989       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177430    9752 command_runner.go:130] ! I0603 14:46:23.238008       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177430    9752 command_runner.go:130] ! I0603 14:46:33.244795       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177430    9752 command_runner.go:130] ! I0603 14:46:33.244940       1 main.go:227] handling current node
	I0603 14:51:52.177430    9752 command_runner.go:130] ! I0603 14:46:33.244960       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177497    9752 command_runner.go:130] ! I0603 14:46:33.244971       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177497    9752 command_runner.go:130] ! I0603 14:46:33.245647       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177540    9752 command_runner.go:130] ! I0603 14:46:33.245764       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177540    9752 command_runner.go:130] ! I0603 14:46:43.261658       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177540    9752 command_runner.go:130] ! I0603 14:46:43.262286       1 main.go:227] handling current node
	I0603 14:51:52.177591    9752 command_runner.go:130] ! I0603 14:46:43.262368       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177591    9752 command_runner.go:130] ! I0603 14:46:43.262496       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177591    9752 command_runner.go:130] ! I0603 14:46:43.262847       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177631    9752 command_runner.go:130] ! I0603 14:46:43.262938       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177631    9752 command_runner.go:130] ! I0603 14:46:53.275414       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177631    9752 command_runner.go:130] ! I0603 14:46:53.275880       1 main.go:227] handling current node
	I0603 14:51:52.177701    9752 command_runner.go:130] ! I0603 14:46:53.276199       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177740    9752 command_runner.go:130] ! I0603 14:46:53.276372       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177740    9752 command_runner.go:130] ! I0603 14:46:53.276690       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177740    9752 command_runner.go:130] ! I0603 14:46:53.276766       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177740    9752 command_runner.go:130] ! I0603 14:47:03.282970       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177792    9752 command_runner.go:130] ! I0603 14:47:03.283067       1 main.go:227] handling current node
	I0603 14:51:52.177792    9752 command_runner.go:130] ! I0603 14:47:03.283157       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177792    9752 command_runner.go:130] ! I0603 14:47:03.283220       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177831    9752 command_runner.go:130] ! I0603 14:47:03.283747       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177831    9752 command_runner.go:130] ! I0603 14:47:03.283832       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177831    9752 command_runner.go:130] ! I0603 14:47:13.289208       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177902    9752 command_runner.go:130] ! I0603 14:47:13.289296       1 main.go:227] handling current node
	I0603 14:51:52.177902    9752 command_runner.go:130] ! I0603 14:47:13.289311       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177902    9752 command_runner.go:130] ! I0603 14:47:13.289321       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.177942    9752 command_runner.go:130] ! I0603 14:47:13.290501       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.177942    9752 command_runner.go:130] ! I0603 14:47:13.290610       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.177942    9752 command_runner.go:130] ! I0603 14:47:23.305390       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.177994    9752 command_runner.go:130] ! I0603 14:47:23.305479       1 main.go:227] handling current node
	I0603 14:51:52.177994    9752 command_runner.go:130] ! I0603 14:47:23.305494       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.177994    9752 command_runner.go:130] ! I0603 14:47:23.305501       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.178088    9752 command_runner.go:130] ! I0603 14:47:23.306027       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.178129    9752 command_runner.go:130] ! I0603 14:47:23.306196       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.178129    9752 command_runner.go:130] ! I0603 14:47:33.320017       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.178181    9752 command_runner.go:130] ! I0603 14:47:33.320267       1 main.go:227] handling current node
	I0603 14:51:52.178181    9752 command_runner.go:130] ! I0603 14:47:33.320364       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.178181    9752 command_runner.go:130] ! I0603 14:47:33.320399       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.178181    9752 command_runner.go:130] ! I0603 14:47:33.320800       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.178258    9752 command_runner.go:130] ! I0603 14:47:33.320833       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.178258    9752 command_runner.go:130] ! I0603 14:47:43.329989       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.178293    9752 command_runner.go:130] ! I0603 14:47:43.330122       1 main.go:227] handling current node
	I0603 14:51:52.178293    9752 command_runner.go:130] ! I0603 14:47:43.330326       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.178293    9752 command_runner.go:130] ! I0603 14:47:43.330486       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.178293    9752 command_runner.go:130] ! I0603 14:47:43.331007       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.178293    9752 command_runner.go:130] ! I0603 14:47:43.331092       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.178293    9752 command_runner.go:130] ! I0603 14:47:53.346870       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.178397    9752 command_runner.go:130] ! I0603 14:47:53.347021       1 main.go:227] handling current node
	I0603 14:51:52.178397    9752 command_runner.go:130] ! I0603 14:47:53.347035       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.178397    9752 command_runner.go:130] ! I0603 14:47:53.347043       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.178397    9752 command_runner.go:130] ! I0603 14:47:53.347400       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.178441    9752 command_runner.go:130] ! I0603 14:47:53.347581       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.178463    9752 command_runner.go:130] ! I0603 14:48:03.360705       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:52.178488    9752 command_runner.go:130] ! I0603 14:48:03.360878       1 main.go:227] handling current node
	I0603 14:51:52.178488    9752 command_runner.go:130] ! I0603 14:48:03.360896       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:52.178488    9752 command_runner.go:130] ! I0603 14:48:03.360904       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:52.178488    9752 command_runner.go:130] ! I0603 14:48:03.361256       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:52.178488    9752 command_runner.go:130] ! I0603 14:48:03.361334       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:52.195800    9752 logs.go:123] Gathering logs for container status ...
	I0603 14:51:52.195800    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 14:51:52.264900    9752 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0603 14:51:52.265005    9752 command_runner.go:130] > f9b260d61dfbd       cbb01a7bd410d                                                                                         8 seconds ago        Running             coredns                   1                   1bc1567075734       coredns-7db6d8ff4d-c9wpc
	I0603 14:51:52.265005    9752 command_runner.go:130] > 291b656660b4b       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   526c48b9021d6       busybox-fc5497c4f-n2t5d
	I0603 14:51:52.265080    9752 command_runner.go:130] > c81abdbb29c7c       6e38f40d628db                                                                                         27 seconds ago       Running             storage-provisioner       2                   b4a4ad712a66e       storage-provisioner
	I0603 14:51:52.265080    9752 command_runner.go:130] > 008dec75d90c7       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a3698c141b116       kindnet-26s27
	I0603 14:51:52.265080    9752 command_runner.go:130] > 2061be0913b2b       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b4a4ad712a66e       storage-provisioner
	I0603 14:51:52.265080    9752 command_runner.go:130] > 42926c33070ce       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   2ae2b089ecf3b       kube-proxy-64l9x
	I0603 14:51:52.265174    9752 command_runner.go:130] > 885576ffcadd7       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   192b150e443d2       kube-apiserver-multinode-720500
	I0603 14:51:52.265174    9752 command_runner.go:130] > 480ef64cfa226       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   3e60bc15f541e       etcd-multinode-720500
	I0603 14:51:52.265253    9752 command_runner.go:130] > f14b3b67d8f28       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   29feb700b8ebf       kube-controller-manager-multinode-720500
	I0603 14:51:52.265253    9752 command_runner.go:130] > e2d000674d525       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   0461b752e7281       kube-scheduler-multinode-720500
	I0603 14:51:52.265253    9752 command_runner.go:130] > a76f9e773a2f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   e2a9c5dc3b1b0       busybox-fc5497c4f-n2t5d
	I0603 14:51:52.265253    9752 command_runner.go:130] > 68e49c3e6ddaa       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   1ac710138e878       coredns-7db6d8ff4d-c9wpc
	I0603 14:51:52.265357    9752 command_runner.go:130] > ab840a6a9856d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   91df341636e89       kindnet-26s27
	I0603 14:51:52.265357    9752 command_runner.go:130] > 3823f2e2bdb28       747097150317f                                                                                         24 minutes ago       Exited              kube-proxy                0                   45c98b77811e1       kube-proxy-64l9x
	I0603 14:51:52.265357    9752 command_runner.go:130] > 63a6ebee2e836       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   19b3080db261a       kube-controller-manager-multinode-720500
	I0603 14:51:52.265463    9752 command_runner.go:130] > ec3860b2bb3ef       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   73f8312902b01       kube-scheduler-multinode-720500
	I0603 14:51:52.268129    9752 logs.go:123] Gathering logs for kube-apiserver [885576ffcadd] ...
	I0603 14:51:52.268159    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 885576ffcadd"
	I0603 14:51:52.297077    9752 command_runner.go:130] ! I0603 14:50:36.316662       1 options.go:221] external host was not specified, using 172.22.154.20
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:36.322174       1 server.go:148] Version: v1.30.1
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:36.322276       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:37.048360       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:37.061107       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:37.064640       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:37.064927       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 14:51:52.297113    9752 command_runner.go:130] ! I0603 14:50:37.065980       1 instance.go:299] Using reconciler: lease
	I0603 14:51:52.297330    9752 command_runner.go:130] ! I0603 14:50:37.835903       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0603 14:51:52.297330    9752 command_runner.go:130] ! W0603 14:50:37.835946       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297330    9752 command_runner.go:130] ! I0603 14:50:38.131228       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0603 14:51:52.297330    9752 command_runner.go:130] ! I0603 14:50:38.131786       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0603 14:51:52.297330    9752 command_runner.go:130] ! I0603 14:50:38.389972       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0603 14:51:52.297425    9752 command_runner.go:130] ! I0603 14:50:38.554749       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0603 14:51:52.297425    9752 command_runner.go:130] ! I0603 14:50:38.569175       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0603 14:51:52.297425    9752 command_runner.go:130] ! W0603 14:50:38.569288       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297425    9752 command_runner.go:130] ! W0603 14:50:38.569316       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.297508    9752 command_runner.go:130] ! I0603 14:50:38.570033       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0603 14:51:52.297508    9752 command_runner.go:130] ! W0603 14:50:38.570117       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297557    9752 command_runner.go:130] ! I0603 14:50:38.571568       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0603 14:51:52.297557    9752 command_runner.go:130] ! I0603 14:50:38.572496       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0603 14:51:52.297557    9752 command_runner.go:130] ! W0603 14:50:38.572572       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0603 14:51:52.297625    9752 command_runner.go:130] ! W0603 14:50:38.572581       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0603 14:51:52.297656    9752 command_runner.go:130] ! I0603 14:50:38.574368       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0603 14:51:52.297656    9752 command_runner.go:130] ! W0603 14:50:38.574469       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0603 14:51:52.297656    9752 command_runner.go:130] ! I0603 14:50:38.575393       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0603 14:51:52.297712    9752 command_runner.go:130] ! W0603 14:50:38.575496       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297712    9752 command_runner.go:130] ! W0603 14:50:38.575505       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.297754    9752 command_runner.go:130] ! I0603 14:50:38.576166       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0603 14:51:52.297754    9752 command_runner.go:130] ! W0603 14:50:38.576256       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297754    9752 command_runner.go:130] ! W0603 14:50:38.576314       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297824    9752 command_runner.go:130] ! I0603 14:50:38.577021       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0603 14:51:52.297864    9752 command_runner.go:130] ! I0603 14:50:38.579498       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0603 14:51:52.297864    9752 command_runner.go:130] ! W0603 14:50:38.579572       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.297916    9752 command_runner.go:130] ! W0603 14:50:38.579581       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.297955    9752 command_runner.go:130] ! I0603 14:50:38.580213       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0603 14:51:52.297955    9752 command_runner.go:130] ! W0603 14:50:38.580317       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298008    9752 command_runner.go:130] ! W0603 14:50:38.580354       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.298008    9752 command_runner.go:130] ! I0603 14:50:38.581564       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0603 14:51:52.298008    9752 command_runner.go:130] ! W0603 14:50:38.581613       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0603 14:51:52.298049    9752 command_runner.go:130] ! I0603 14:50:38.584780       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0603 14:51:52.298049    9752 command_runner.go:130] ! W0603 14:50:38.585003       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298102    9752 command_runner.go:130] ! W0603 14:50:38.585204       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.298144    9752 command_runner.go:130] ! I0603 14:50:38.586651       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0603 14:51:52.298144    9752 command_runner.go:130] ! W0603 14:50:38.586996       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298144    9752 command_runner.go:130] ! W0603 14:50:38.587142       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.298219    9752 command_runner.go:130] ! I0603 14:50:38.595038       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0603 14:51:52.298219    9752 command_runner.go:130] ! W0603 14:50:38.595233       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298260    9752 command_runner.go:130] ! W0603 14:50:38.595389       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.298260    9752 command_runner.go:130] ! I0603 14:50:38.598793       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0603 14:51:52.298260    9752 command_runner.go:130] ! I0603 14:50:38.602076       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0603 14:51:52.298309    9752 command_runner.go:130] ! W0603 14:50:38.614489       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0603 14:51:52.298351    9752 command_runner.go:130] ! W0603 14:50:38.614724       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298351    9752 command_runner.go:130] ! I0603 14:50:38.625009       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0603 14:51:52.298351    9752 command_runner.go:130] ! W0603 14:50:38.625156       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0603 14:51:52.298403    9752 command_runner.go:130] ! W0603 14:50:38.625167       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:38.628702       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0603 14:51:52.298403    9752 command_runner.go:130] ! W0603 14:50:38.628761       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298403    9752 command_runner.go:130] ! W0603 14:50:38.628770       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:38.629748       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0603 14:51:52.298403    9752 command_runner.go:130] ! W0603 14:50:38.629860       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:38.645169       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0603 14:51:52.298403    9752 command_runner.go:130] ! W0603 14:50:38.645265       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:39.261254       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:39.261440       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:39.261269       1 secure_serving.go:213] Serving securely on [::]:8443
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:39.261878       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:39.262067       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0603 14:51:52.298403    9752 command_runner.go:130] ! I0603 14:50:39.265023       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0603 14:51:52.298651    9752 command_runner.go:130] ! I0603 14:50:39.265458       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0603 14:51:52.298651    9752 command_runner.go:130] ! I0603 14:50:39.265691       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0603 14:51:52.298700    9752 command_runner.go:130] ! I0603 14:50:39.266224       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0603 14:51:52.298700    9752 command_runner.go:130] ! I0603 14:50:39.266475       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0603 14:51:52.298700    9752 command_runner.go:130] ! I0603 14:50:39.266740       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.267054       1 aggregator.go:163] waiting for initial CRD sync...
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.267429       1 controller.go:116] Starting legacy_token_tracking_controller
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.267943       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.268211       1 controller.go:78] Starting OpenAPI AggregationController
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.268471       1 available_controller.go:423] Starting AvailableConditionController
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.268557       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0603 14:51:52.298755    9752 command_runner.go:130] ! I0603 14:50:39.268599       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.269220       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.284296       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.284599       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.269381       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.285184       1 controller.go:139] Starting OpenAPI controller
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.285202       1 controller.go:87] Starting OpenAPI V3 controller
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.285216       1 naming_controller.go:291] Starting NamingConditionController
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.285225       1 establishing_controller.go:76] Starting EstablishingController
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.285237       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 14:51:52.298865    9752 command_runner.go:130] ! I0603 14:50:39.285244       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 14:51:52.299083    9752 command_runner.go:130] ! I0603 14:50:39.285251       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 14:51:52.299083    9752 command_runner.go:130] ! I0603 14:50:39.285707       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 14:51:52.299083    9752 command_runner.go:130] ! I0603 14:50:39.307386       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 14:51:52.299083    9752 command_runner.go:130] ! I0603 14:50:39.313286       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0603 14:51:52.299083    9752 command_runner.go:130] ! I0603 14:50:39.410099       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 14:51:52.299161    9752 command_runner.go:130] ! I0603 14:50:39.413505       1 aggregator.go:165] initial CRD sync complete...
	I0603 14:51:52.299161    9752 command_runner.go:130] ! I0603 14:50:39.413538       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 14:51:52.299161    9752 command_runner.go:130] ! I0603 14:50:39.413547       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 14:51:52.299217    9752 command_runner.go:130] ! I0603 14:50:39.450903       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 14:51:52.299217    9752 command_runner.go:130] ! I0603 14:50:39.462513       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:51:52.299370    9752 command_runner.go:130] ! I0603 14:50:39.464182       1 policy_source.go:224] refreshing policies
	I0603 14:51:52.299412    9752 command_runner.go:130] ! I0603 14:50:39.465876       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 14:51:52.299461    9752 command_runner.go:130] ! I0603 14:50:39.466992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 14:51:52.299549    9752 command_runner.go:130] ! I0603 14:50:39.468755       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 14:51:52.299549    9752 command_runner.go:130] ! I0603 14:50:39.469769       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 14:51:52.299549    9752 command_runner.go:130] ! I0603 14:50:39.474781       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 14:51:52.299615    9752 command_runner.go:130] ! I0603 14:50:39.486280       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 14:51:52.299615    9752 command_runner.go:130] ! I0603 14:50:39.486306       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 14:51:52.299703    9752 command_runner.go:130] ! I0603 14:50:39.514217       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 14:51:52.299703    9752 command_runner.go:130] ! I0603 14:50:39.514539       1 cache.go:39] Caches are synced for autoregister controller
	I0603 14:51:52.299728    9752 command_runner.go:130] ! I0603 14:50:40.271657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 14:51:52.299728    9752 command_runner.go:130] ! W0603 14:50:40.806504       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.22.154.20]
	I0603 14:51:52.299770    9752 command_runner.go:130] ! I0603 14:50:40.811756       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 14:51:52.299770    9752 command_runner.go:130] ! I0603 14:50:40.836037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 14:51:52.299770    9752 command_runner.go:130] ! I0603 14:50:42.134633       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 14:51:52.299811    9752 command_runner.go:130] ! I0603 14:50:42.350516       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 14:51:52.299811    9752 command_runner.go:130] ! I0603 14:50:42.378696       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 14:51:52.299811    9752 command_runner.go:130] ! I0603 14:50:42.521546       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 14:51:52.299872    9752 command_runner.go:130] ! I0603 14:50:42.533218       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 14:51:52.306817    9752 logs.go:123] Gathering logs for etcd [480ef64cfa22] ...
	I0603 14:51:52.306817    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480ef64cfa22"
	I0603 14:51:52.332623    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:35.886507Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 14:51:52.333446    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.887805Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.22.154.20:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.22.154.20:2380","--initial-cluster=multinode-720500=https://172.22.154.20:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.22.154.20:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.22.154.20:2380","--name=multinode-720500","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0603 14:51:52.333482    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888235Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:35.88843Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888669Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.22.154.20:2380"]}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888851Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.900566Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"]}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.902079Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-720500","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.951251Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"47.801744ms"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.980047Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.011946Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","commit-index":2070}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=()"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became follower at term 2"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a5b02d21ad5b31ff [peers: [], term: 2, commit: 2070, applied: 0, lastindex: 2070, lastterm: 2]"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:36.026369Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.034388Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1394}
	I0603 14:51:52.333526    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.043305Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1796}
	I0603 14:51:52.334963    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.052705Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0603 14:51:52.334963    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.062682Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"a5b02d21ad5b31ff","timeout":"7s"}
	I0603 14:51:52.335221    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.063103Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"a5b02d21ad5b31ff"}
	I0603 14:51:52.335221    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.063165Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"a5b02d21ad5b31ff","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0603 14:51:52.335221    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06697Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0603 14:51:52.335221    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06815Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 14:51:52.335221    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.068652Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0603 14:51:52.335369    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0603 14:51:52.335369    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.068733Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0603 14:51:52.335369    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=(11939092234824790527)"}
	I0603 14:51:52.335369    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","added-peer-id":"a5b02d21ad5b31ff","added-peer-peer-urls":["https://172.22.150.195:2380"]}
	I0603 14:51:52.335476    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","cluster-version":"3.5"}
	I0603 14:51:52.335476    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069633Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0603 14:51:52.335541    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069793Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a5b02d21ad5b31ff","initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0603 14:51:52.335541    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069837Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0603 14:51:52.335604    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069995Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.22.154.20:2380"}
	I0603 14:51:52.335604    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.070008Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.22.154.20:2380"}
	I0603 14:51:52.335604    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.714622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff is starting a new election at term 2"}
	I0603 14:51:52.335775    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became pre-candidate at term 2"}
	I0603 14:51:52.335815    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.71538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgPreVoteResp from a5b02d21ad5b31ff at term 2"}
	I0603 14:51:52.335869    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became candidate at term 3"}
	I0603 14:51:52.335869    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgVoteResp from a5b02d21ad5b31ff at term 3"}
	I0603 14:51:52.335910    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.716205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became leader at term 3"}
	I0603 14:51:52.335950    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.716405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a5b02d21ad5b31ff elected leader a5b02d21ad5b31ff at term 3"}
	I0603 14:51:52.335950    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.724847Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 14:51:52.336073    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.724791Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a5b02d21ad5b31ff","local-member-attributes":"{Name:multinode-720500 ClientURLs:[https://172.22.154.20:2379]}","request-path":"/0/members/a5b02d21ad5b31ff/attributes","cluster-id":"6a80a2fe8578e5e6","publish-timeout":"7s"}
	I0603 14:51:52.336101    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.725564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 14:51:52.336101    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.726196Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0603 14:51:52.336101    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.726364Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0603 14:51:52.336101    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.729309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0603 14:51:52.336101    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.730855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.22.154.20:2379"}
	I0603 14:51:52.346842    9752 logs.go:123] Gathering logs for kube-proxy [3823f2e2bdb2] ...
	I0603 14:51:52.346842    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3823f2e2bdb2"
	I0603 14:51:52.372692    9752 command_runner.go:130] ! I0603 14:27:34.209759       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:51:52.372692    9752 command_runner.go:130] ! I0603 14:27:34.223354       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.150.195"]
	I0603 14:51:52.372692    9752 command_runner.go:130] ! I0603 14:27:34.293018       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:51:52.372692    9752 command_runner.go:130] ! I0603 14:27:34.293146       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:51:52.373041    9752 command_runner.go:130] ! I0603 14:27:34.293240       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:51:52.373079    9752 command_runner.go:130] ! I0603 14:27:34.299545       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:51:52.373079    9752 command_runner.go:130] ! I0603 14:27:34.300745       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:51:52.373079    9752 command_runner.go:130] ! I0603 14:27:34.300860       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:52.373079    9752 command_runner.go:130] ! I0603 14:27:34.304329       1 config.go:192] "Starting service config controller"
	I0603 14:51:52.373169    9752 command_runner.go:130] ! I0603 14:27:34.304371       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:51:52.373208    9752 command_runner.go:130] ! I0603 14:27:34.304437       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:51:52.373220    9752 command_runner.go:130] ! I0603 14:27:34.304447       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:51:52.373262    9752 command_runner.go:130] ! I0603 14:27:34.308322       1 config.go:319] "Starting node config controller"
	I0603 14:51:52.373262    9752 command_runner.go:130] ! I0603 14:27:34.308362       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:51:52.373262    9752 command_runner.go:130] ! I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:51:52.373262    9752 command_runner.go:130] ! I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:51:52.373262    9752 command_runner.go:130] ! I0603 14:27:34.409156       1 shared_informer.go:320] Caches are synced for node config
	I0603 14:51:54.892527    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:51:54.902158    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 200:
	ok
	I0603 14:51:54.902514    9752 round_trippers.go:463] GET https://172.22.154.20:8443/version
	I0603 14:51:54.902514    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:54.902514    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:54.902514    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:54.904079    9752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 14:51:54.904079    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:54.904540    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:54.904540    9752 round_trippers.go:580]     Content-Length: 263
	I0603 14:51:54.904540    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:54 GMT
	I0603 14:51:54.904540    9752 round_trippers.go:580]     Audit-Id: 005c12dc-db55-4252-ac7c-42d0ce099d4f
	I0603 14:51:54.904578    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:54.904578    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:54.904578    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:54.904578    9752 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0603 14:51:54.904696    9752 api_server.go:141] control plane version: v1.30.1
	I0603 14:51:54.904696    9752 api_server.go:131] duration metric: took 3.7443414s to wait for apiserver health ...
	I0603 14:51:54.904696    9752 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 14:51:54.914519    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0603 14:51:54.937851    9752 command_runner.go:130] > 885576ffcadd
	I0603 14:51:54.937851    9752 logs.go:276] 1 containers: [885576ffcadd]
	I0603 14:51:54.947497    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0603 14:51:54.968500    9752 command_runner.go:130] > 480ef64cfa22
	I0603 14:51:54.969516    9752 logs.go:276] 1 containers: [480ef64cfa22]
	I0603 14:51:54.978496    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0603 14:51:54.998504    9752 command_runner.go:130] > f9b260d61dfb
	I0603 14:51:54.999520    9752 command_runner.go:130] > 68e49c3e6dda
	I0603 14:51:54.999520    9752 logs.go:276] 2 containers: [f9b260d61dfb 68e49c3e6dda]
	I0603 14:51:55.007494    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0603 14:51:55.028453    9752 command_runner.go:130] > e2d000674d52
	I0603 14:51:55.028491    9752 command_runner.go:130] > ec3860b2bb3e
	I0603 14:51:55.028491    9752 logs.go:276] 2 containers: [e2d000674d52 ec3860b2bb3e]
	I0603 14:51:55.038409    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0603 14:51:55.063604    9752 command_runner.go:130] > 42926c33070c
	I0603 14:51:55.063684    9752 command_runner.go:130] > 3823f2e2bdb2
	I0603 14:51:55.063752    9752 logs.go:276] 2 containers: [42926c33070c 3823f2e2bdb2]
	I0603 14:51:55.073790    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0603 14:51:55.097161    9752 command_runner.go:130] > f14b3b67d8f2
	I0603 14:51:55.097161    9752 command_runner.go:130] > 63a6ebee2e83
	I0603 14:51:55.097161    9752 logs.go:276] 2 containers: [f14b3b67d8f2 63a6ebee2e83]
	I0603 14:51:55.106155    9752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0603 14:51:55.129204    9752 command_runner.go:130] > 008dec75d90c
	I0603 14:51:55.129204    9752 command_runner.go:130] > ab840a6a9856
	I0603 14:51:55.130305    9752 logs.go:276] 2 containers: [008dec75d90c ab840a6a9856]
	I0603 14:51:55.130505    9752 logs.go:123] Gathering logs for kube-scheduler [ec3860b2bb3e] ...
	I0603 14:51:55.130505    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3860b2bb3e"
	I0603 14:51:55.157343    9752 command_runner.go:130] ! I0603 14:27:13.528076       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:55.158146    9752 command_runner.go:130] ! W0603 14:27:15.031664       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 14:51:55.158146    9752 command_runner.go:130] ! W0603 14:27:15.031870       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:55.158146    9752 command_runner.go:130] ! W0603 14:27:15.032299       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 14:51:55.158146    9752 command_runner.go:130] ! W0603 14:27:15.032427       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:51:55.158146    9752 command_runner.go:130] ! I0603 14:27:15.125795       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:51:55.158146    9752 command_runner.go:130] ! I0603 14:27:15.125934       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.158146    9752 command_runner.go:130] ! I0603 14:27:15.129030       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:51:55.158146    9752 command_runner.go:130] ! I0603 14:27:15.132330       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:51:55.158146    9752 command_runner.go:130] ! I0603 14:27:15.140068       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:55.158146    9752 command_runner.go:130] ! I0603 14:27:15.132344       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:55.158146    9752 command_runner.go:130] ! W0603 14:27:15.148563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.158146    9752 command_runner.go:130] ! E0603 14:27:15.150706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.158146    9752 command_runner.go:130] ! W0603 14:27:15.151023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:55.158146    9752 command_runner.go:130] ! E0603 14:27:15.152765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:55.158685    9752 command_runner.go:130] ! W0603 14:27:15.154981       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:55.158685    9752 command_runner.go:130] ! E0603 14:27:15.155066       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:55.158685    9752 command_runner.go:130] ! W0603 14:27:15.155620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.158798    9752 command_runner.go:130] ! E0603 14:27:15.155698       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.158888    9752 command_runner.go:130] ! W0603 14:27:15.155839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.158949    9752 command_runner.go:130] ! E0603 14:27:15.155928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.158973    9752 command_runner.go:130] ! W0603 14:27:15.151535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:55.158973    9752 command_runner.go:130] ! E0603 14:27:15.156969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:55.159036    9752 command_runner.go:130] ! W0603 14:27:15.156902       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:55.159036    9752 command_runner.go:130] ! E0603 14:27:15.158297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:55.159130    9752 command_runner.go:130] ! W0603 14:27:15.151896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:55.159130    9752 command_runner.go:130] ! E0603 14:27:15.159055       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:55.159183    9752 command_runner.go:130] ! W0603 14:27:15.152056       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:55.159183    9752 command_runner.go:130] ! E0603 14:27:15.159892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:15.152248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:15.152377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:15.152535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:15.152729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:15.156318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:15.151779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:15.160787       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:15.160968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:15.161285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:15.161862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:15.161874       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:15.161880       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:16.140920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:16.140979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! W0603 14:27:16.241899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:55.159237    9752 command_runner.go:130] ! E0603 14:27:16.242196       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0603 14:51:55.159822    9752 command_runner.go:130] ! W0603 14:27:16.262469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.159822    9752 command_runner.go:130] ! E0603 14:27:16.263070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.159822    9752 command_runner.go:130] ! W0603 14:27:16.294257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.159965    9752 command_runner.go:130] ! E0603 14:27:16.294495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.160000    9752 command_runner.go:130] ! W0603 14:27:16.364252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:55.160000    9752 command_runner.go:130] ! E0603 14:27:16.364604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0603 14:51:55.160000    9752 command_runner.go:130] ! W0603 14:27:16.422522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:55.160196    9752 command_runner.go:130] ! E0603 14:27:16.422581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0603 14:51:55.160196    9752 command_runner.go:130] ! W0603 14:27:16.468112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.160196    9752 command_runner.go:130] ! E0603 14:27:16.468324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.160393    9752 command_runner.go:130] ! W0603 14:27:16.510809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:55.160393    9752 command_runner.go:130] ! E0603 14:27:16.511288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 14:51:55.160504    9752 command_runner.go:130] ! W0603 14:27:16.596260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:55.160504    9752 command_runner.go:130] ! E0603 14:27:16.596369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0603 14:51:55.160504    9752 command_runner.go:130] ! W0603 14:27:16.607837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.160504    9752 command_runner.go:130] ! E0603 14:27:16.608073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! W0603 14:27:16.665087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! E0603 14:27:16.666440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! W0603 14:27:16.711247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! E0603 14:27:16.711594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! W0603 14:27:16.716923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! E0603 14:27:16.716968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! W0603 14:27:16.731690       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:55.160580    9752 command_runner.go:130] ! E0603 14:27:16.732816       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:55.160580    9752 command_runner.go:130] ! W0603 14:27:16.743716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! E0603 14:27:16.743766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:51:55.160580    9752 command_runner.go:130] ! I0603 14:27:18.441261       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:55.160580    9752 command_runner.go:130] ! E0603 14:48:07.717597       1 run.go:74] "command failed" err="finished without leader elect"
	I0603 14:51:55.171362    9752 logs.go:123] Gathering logs for kube-controller-manager [63a6ebee2e83] ...
	I0603 14:51:55.171362    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a6ebee2e83"
	I0603 14:51:55.199327    9752 command_runner.go:130] ! I0603 14:27:13.353282       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:55.200149    9752 command_runner.go:130] ! I0603 14:27:13.803232       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 14:51:55.200149    9752 command_runner.go:130] ! I0603 14:27:13.803270       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.200149    9752 command_runner.go:130] ! I0603 14:27:13.805599       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 14:51:55.200241    9752 command_runner.go:130] ! I0603 14:27:13.806647       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:55.200241    9752 command_runner.go:130] ! I0603 14:27:13.806911       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:55.200241    9752 command_runner.go:130] ! I0603 14:27:13.807149       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:55.200241    9752 command_runner.go:130] ! I0603 14:27:18.070475       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 14:51:55.200357    9752 command_runner.go:130] ! I0603 14:27:18.071643       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 14:51:55.200379    9752 command_runner.go:130] ! I0603 14:27:18.088516       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 14:51:55.200405    9752 command_runner.go:130] ! I0603 14:27:18.089260       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 14:51:55.200405    9752 command_runner.go:130] ! I0603 14:27:18.091678       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 14:51:55.200405    9752 command_runner.go:130] ! I0603 14:27:18.106231       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 14:51:55.201325    9752 command_runner.go:130] ! I0603 14:27:18.107081       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 14:51:55.202191    9752 command_runner.go:130] ! I0603 14:27:18.108455       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:55.202191    9752 command_runner.go:130] ! I0603 14:27:18.109348       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 14:51:55.202191    9752 command_runner.go:130] ! I0603 14:27:18.151033       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 14:51:55.202278    9752 command_runner.go:130] ! I0603 14:27:18.151678       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 14:51:55.202278    9752 command_runner.go:130] ! I0603 14:27:18.154062       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 14:51:55.202317    9752 command_runner.go:130] ! I0603 14:27:18.171773       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 14:51:55.202317    9752 command_runner.go:130] ! I0603 14:27:18.172224       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 14:51:55.202373    9752 command_runner.go:130] ! I0603 14:27:18.174296       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 14:51:55.202373    9752 command_runner.go:130] ! I0603 14:27:18.174338       1 shared_informer.go:320] Caches are synced for tokens
	I0603 14:51:55.202411    9752 command_runner.go:130] ! I0603 14:27:18.177788       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 14:51:55.202411    9752 command_runner.go:130] ! I0603 14:27:18.178320       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 14:51:55.202441    9752 command_runner.go:130] ! I0603 14:27:28.218964       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.219108       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.219379       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.219457       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.240397       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.240536       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.241865       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.252890       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.252986       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.253020       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.253969       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.254003       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.267837       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.268144       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.268510       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.280487       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.280963       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:55.202474    9752 command_runner.go:130] ! I0603 14:27:28.281100       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 14:51:55.203009    9752 command_runner.go:130] ! I0603 14:27:28.330303       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 14:51:55.203009    9752 command_runner.go:130] ! I0603 14:27:28.330841       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 14:51:55.203110    9752 command_runner.go:130] ! E0603 14:27:28.344040       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 14:51:55.203145    9752 command_runner.go:130] ! I0603 14:27:28.344231       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 14:51:55.203176    9752 command_runner.go:130] ! I0603 14:27:28.359644       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 14:51:55.203227    9752 command_runner.go:130] ! I0603 14:27:28.360056       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 14:51:55.203227    9752 command_runner.go:130] ! I0603 14:27:28.360090       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 14:51:55.203227    9752 command_runner.go:130] ! I0603 14:27:28.377777       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 14:51:55.203227    9752 command_runner.go:130] ! I0603 14:27:28.378044       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 14:51:55.203227    9752 command_runner.go:130] ! I0603 14:27:28.378071       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 14:51:55.203350    9752 command_runner.go:130] ! I0603 14:27:28.393317       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 14:51:55.203350    9752 command_runner.go:130] ! I0603 14:27:28.393857       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 14:51:55.203452    9752 command_runner.go:130] ! I0603 14:27:28.394059       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 14:51:55.203452    9752 command_runner.go:130] ! I0603 14:27:28.410446       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 14:51:55.203552    9752 command_runner.go:130] ! I0603 14:27:28.411081       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 14:51:55.203552    9752 command_runner.go:130] ! I0603 14:27:28.412101       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 14:51:55.203634    9752 command_runner.go:130] ! I0603 14:27:28.512629       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 14:51:55.203634    9752 command_runner.go:130] ! I0603 14:27:28.513125       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 14:51:55.203709    9752 command_runner.go:130] ! I0603 14:27:28.664349       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 14:51:55.203709    9752 command_runner.go:130] ! I0603 14:27:28.664428       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 14:51:55.203748    9752 command_runner.go:130] ! I0603 14:27:28.664441       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 14:51:55.203815    9752 command_runner.go:130] ! I0603 14:27:28.664449       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 14:51:55.203815    9752 command_runner.go:130] ! I0603 14:27:28.708054       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 14:51:55.203882    9752 command_runner.go:130] ! I0603 14:27:28.708215       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 14:51:55.204036    9752 command_runner.go:130] ! I0603 14:27:28.708231       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 14:51:55.204036    9752 command_runner.go:130] ! I0603 14:27:28.708444       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 14:51:55.204217    9752 command_runner.go:130] ! I0603 14:27:28.708473       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 14:51:55.204280    9752 command_runner.go:130] ! I0603 14:27:28.708481       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 14:51:55.204375    9752 command_runner.go:130] ! I0603 14:27:28.864634       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 14:51:55.204399    9752 command_runner.go:130] ! I0603 14:27:28.864803       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:28.865680       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.059529       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.059649       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.059722       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.059857       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.216054       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.216706       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.217129       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.364837       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.364997       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.365010       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.412763       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.412820       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.412852       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.412870       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.566965       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.567223       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.568152       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.820140       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.821302       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.821913       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.821950       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 14:51:55.204473    9752 command_runner.go:130] ! I0603 14:27:29.821977       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 14:51:55.205010    9752 command_runner.go:130] ! E0603 14:27:29.857788       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 14:51:55.205010    9752 command_runner.go:130] ! I0603 14:27:29.858966       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 14:51:55.205056    9752 command_runner.go:130] ! I0603 14:27:30.016833       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 14:51:55.205056    9752 command_runner.go:130] ! I0603 14:27:30.016997       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 14:51:55.205103    9752 command_runner.go:130] ! I0603 14:27:30.017402       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 14:51:55.205126    9752 command_runner.go:130] ! I0603 14:27:30.171847       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 14:51:55.205126    9752 command_runner.go:130] ! I0603 14:27:30.172459       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 14:51:55.205199    9752 command_runner.go:130] ! I0603 14:27:30.171899       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 14:51:55.205227    9752 command_runner.go:130] ! I0603 14:27:30.172588       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 14:51:55.205247    9752 command_runner.go:130] ! I0603 14:27:30.313964       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 14:51:55.205278    9752 command_runner.go:130] ! I0603 14:27:30.316900       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 14:51:55.205306    9752 command_runner.go:130] ! I0603 14:27:30.318749       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 14:51:55.205331    9752 command_runner.go:130] ! I0603 14:27:30.359770       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 14:51:55.205331    9752 command_runner.go:130] ! I0603 14:27:30.359992       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 14:51:55.205331    9752 command_runner.go:130] ! I0603 14:27:30.360405       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.205418    9752 command_runner.go:130] ! I0603 14:27:30.361780       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 14:51:55.205418    9752 command_runner.go:130] ! I0603 14:27:30.362782       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 14:51:55.205478    9752 command_runner.go:130] ! I0603 14:27:30.362463       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 14:51:55.205478    9752 command_runner.go:130] ! I0603 14:27:30.363332       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:55.205518    9752 command_runner.go:130] ! I0603 14:27:30.362554       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 14:51:55.205518    9752 command_runner.go:130] ! I0603 14:27:30.363636       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 14:51:55.205518    9752 command_runner.go:130] ! I0603 14:27:30.362564       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.205585    9752 command_runner.go:130] ! I0603 14:27:30.362302       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 14:51:55.205585    9752 command_runner.go:130] ! I0603 14:27:30.362526       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.205585    9752 command_runner.go:130] ! I0603 14:27:30.362586       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.205645    9752 command_runner.go:130] ! I0603 14:27:30.513474       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 14:51:55.205669    9752 command_runner.go:130] ! I0603 14:27:30.513598       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 14:51:55.205713    9752 command_runner.go:130] ! I0603 14:27:30.513645       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 14:51:55.205740    9752 command_runner.go:130] ! I0603 14:27:30.663349       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 14:51:55.205740    9752 command_runner.go:130] ! I0603 14:27:30.663937       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 14:51:55.205829    9752 command_runner.go:130] ! I0603 14:27:30.664013       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 14:51:55.205829    9752 command_runner.go:130] ! I0603 14:27:30.965387       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.965553       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.965614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.965669       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.965730       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! W0603 14:27:30.965760       1 shared_informer.go:597] resyncPeriod 16h47m43.189313611s is smaller than resyncCheckPeriod 20h18m50.945071724s and the informer has already started. Changing it to 20h18m50.945071724s
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.965868       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.966063       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.966153       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.966351       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! W0603 14:27:30.966376       1 shared_informer.go:597] resyncPeriod 20h4m14.719740563s is smaller than resyncCheckPeriod 20h18m50.945071724s and the informer has already started. Changing it to 20h18m50.945071724s
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.966444       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.966547       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.966953       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.967035       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.967206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.967556       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.967765       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.967951       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.968043       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.968127       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.968266       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.968373       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 14:51:55.205854    9752 command_runner.go:130] ! I0603 14:27:30.969236       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 14:51:55.206450    9752 command_runner.go:130] ! I0603 14:27:30.969448       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:55.206450    9752 command_runner.go:130] ! I0603 14:27:30.969971       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 14:51:55.206512    9752 command_runner.go:130] ! I0603 14:27:31.113941       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 14:51:55.206512    9752 command_runner.go:130] ! I0603 14:27:31.114128       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 14:51:55.206602    9752 command_runner.go:130] ! I0603 14:27:31.114206       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 14:51:55.206637    9752 command_runner.go:130] ! I0603 14:27:31.263385       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 14:51:55.206637    9752 command_runner.go:130] ! I0603 14:27:31.263850       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 14:51:55.206637    9752 command_runner.go:130] ! I0603 14:27:31.263883       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 14:51:55.206637    9752 command_runner.go:130] ! I0603 14:27:31.412784       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 14:51:55.206698    9752 command_runner.go:130] ! I0603 14:27:31.412929       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 14:51:55.206722    9752 command_runner.go:130] ! I0603 14:27:31.412960       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 14:51:55.206722    9752 command_runner.go:130] ! I0603 14:27:31.563645       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 14:51:55.206722    9752 command_runner.go:130] ! I0603 14:27:31.563784       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 14:51:55.206722    9752 command_runner.go:130] ! I0603 14:27:31.563863       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 14:51:55.206826    9752 command_runner.go:130] ! I0603 14:27:31.716550       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 14:51:55.206826    9752 command_runner.go:130] ! I0603 14:27:31.717040       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 14:51:55.206826    9752 command_runner.go:130] ! I0603 14:27:31.717246       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 14:51:55.206826    9752 command_runner.go:130] ! I0603 14:27:31.727461       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:55.206904    9752 command_runner.go:130] ! I0603 14:27:31.754004       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500\" does not exist"
	I0603 14:51:55.206904    9752 command_runner.go:130] ! I0603 14:27:31.754224       1 shared_informer.go:320] Caches are synced for GC
	I0603 14:51:55.206904    9752 command_runner.go:130] ! I0603 14:27:31.754460       1 shared_informer.go:320] Caches are synced for HPA
	I0603 14:51:55.206904    9752 command_runner.go:130] ! I0603 14:27:31.760470       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:51:55.207006    9752 command_runner.go:130] ! I0603 14:27:31.761503       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.763249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.763617       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.764580       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.765622       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.765811       1 shared_informer.go:320] Caches are synced for TTL
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.765139       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.765067       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.768636       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.770136       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.772665       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.775271       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.782285       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.792874       1 shared_informer.go:320] Caches are synced for service account
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.795205       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.809247       1 shared_informer.go:320] Caches are synced for taint
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.809495       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.810723       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500"
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.812015       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.812917       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.812992       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.815953       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.816065       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.816884       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.817703       1 shared_informer.go:320] Caches are synced for expand
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.817728       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.819607       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.820072       1 shared_informer.go:320] Caches are synced for node
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.820270       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.820477       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.820555       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.820587       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.820081       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.825727       1 shared_informer.go:320] Caches are synced for namespace
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.832846       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.842133       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.855357       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500" podCIDRs=["10.244.0.0/24"]
	I0603 14:51:55.207033    9752 command_runner.go:130] ! I0603 14:27:31.878271       1 shared_informer.go:320] Caches are synced for job
	I0603 14:51:55.207559    9752 command_runner.go:130] ! I0603 14:27:31.913558       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:51:55.207559    9752 command_runner.go:130] ! I0603 14:27:31.965153       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.028352       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.061268       1 shared_informer.go:320] Caches are synced for disruption
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.065241       1 shared_informer.go:320] Caches are synced for deployment
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.069863       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.469591       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.510278       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:32.510533       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:33.110436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="199.281878ms"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:33.230475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="119.89616ms"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:33.230569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59µs"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:34.176449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.004127ms"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:34.199426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.643683ms"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:34.201037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.6µs"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:43.109227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="168.101µs"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:43.154756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="203.6µs"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:44.622262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.3µs"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:45.655101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.946906ms"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:45.656447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.098µs"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:27:46.817078       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:30:30.530460       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:30:30.563054       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m02" podCIDRs=["10.244.1.0/24"]
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:30:31.846889       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:30:49.741096       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.207724    9752 command_runner.go:130] ! I0603 14:31:16.611365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.145667ms"
	I0603 14:51:55.208221    9752 command_runner.go:130] ! I0603 14:31:16.634251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.843998ms"
	I0603 14:51:55.208221    9752 command_runner.go:130] ! I0603 14:31:16.634722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="196.103µs"
	I0603 14:51:55.208221    9752 command_runner.go:130] ! I0603 14:31:16.635057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.4µs"
	I0603 14:51:55.208221    9752 command_runner.go:130] ! I0603 14:31:16.670503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.001µs"
	I0603 14:51:55.208312    9752 command_runner.go:130] ! I0603 14:31:19.698737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.129108ms"
	I0603 14:51:55.208312    9752 command_runner.go:130] ! I0603 14:31:19.698833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.8µs"
	I0603 14:51:55.208312    9752 command_runner.go:130] ! I0603 14:31:20.055879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.87041ms"
	I0603 14:51:55.208312    9752 command_runner.go:130] ! I0603 14:31:20.057158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.2µs"
	I0603 14:51:55.208312    9752 command_runner.go:130] ! I0603 14:35:14.351135       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.208408    9752 command_runner.go:130] ! I0603 14:35:14.351827       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:55.208408    9752 command_runner.go:130] ! I0603 14:35:14.376803       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.2.0/24"]
	I0603 14:51:55.208553    9752 command_runner.go:130] ! I0603 14:35:16.927010       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:51:55.208553    9752 command_runner.go:130] ! I0603 14:35:33.157459       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.208638    9752 command_runner.go:130] ! I0603 14:43:17.065455       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.208638    9752 command_runner.go:130] ! I0603 14:45:58.451014       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.208638    9752 command_runner.go:130] ! I0603 14:46:04.988996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.208702    9752 command_runner.go:130] ! I0603 14:46:04.989982       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:55.208702    9752 command_runner.go:130] ! I0603 14:46:05.046032       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.3.0/24"]
	I0603 14:51:55.208702    9752 command_runner.go:130] ! I0603 14:46:11.957254       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.208779    9752 command_runner.go:130] ! I0603 14:47:47.196592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:51:55.226914    9752 logs.go:123] Gathering logs for kube-apiserver [885576ffcadd] ...
	I0603 14:51:55.226914    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 885576ffcadd"
	I0603 14:51:55.259657    9752 command_runner.go:130] ! I0603 14:50:36.316662       1 options.go:221] external host was not specified, using 172.22.154.20
	I0603 14:51:55.259657    9752 command_runner.go:130] ! I0603 14:50:36.322174       1 server.go:148] Version: v1.30.1
	I0603 14:51:55.259657    9752 command_runner.go:130] ! I0603 14:50:36.322276       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.259657    9752 command_runner.go:130] ! I0603 14:50:37.048360       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 14:51:55.259764    9752 command_runner.go:130] ! I0603 14:50:37.061107       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:51:55.259826    9752 command_runner.go:130] ! I0603 14:50:37.064640       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 14:51:55.259826    9752 command_runner.go:130] ! I0603 14:50:37.064927       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 14:51:55.259893    9752 command_runner.go:130] ! I0603 14:50:37.065980       1 instance.go:299] Using reconciler: lease
	I0603 14:51:55.259924    9752 command_runner.go:130] ! I0603 14:50:37.835903       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0603 14:51:55.259924    9752 command_runner.go:130] ! W0603 14:50:37.835946       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.259924    9752 command_runner.go:130] ! I0603 14:50:38.131228       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0603 14:51:55.259984    9752 command_runner.go:130] ! I0603 14:50:38.131786       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0603 14:51:55.259984    9752 command_runner.go:130] ! I0603 14:50:38.389972       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0603 14:51:55.260007    9752 command_runner.go:130] ! I0603 14:50:38.554749       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0603 14:51:55.260007    9752 command_runner.go:130] ! I0603 14:50:38.569175       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0603 14:51:55.260061    9752 command_runner.go:130] ! W0603 14:50:38.569288       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260084    9752 command_runner.go:130] ! W0603 14:50:38.569316       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260084    9752 command_runner.go:130] ! I0603 14:50:38.570033       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0603 14:51:55.260084    9752 command_runner.go:130] ! W0603 14:50:38.570117       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260152    9752 command_runner.go:130] ! I0603 14:50:38.571568       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0603 14:51:55.260174    9752 command_runner.go:130] ! I0603 14:50:38.572496       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0603 14:51:55.260174    9752 command_runner.go:130] ! W0603 14:50:38.572572       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0603 14:51:55.260174    9752 command_runner.go:130] ! W0603 14:50:38.572581       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0603 14:51:55.260225    9752 command_runner.go:130] ! I0603 14:50:38.574368       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0603 14:51:55.260225    9752 command_runner.go:130] ! W0603 14:50:38.574469       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0603 14:51:55.260247    9752 command_runner.go:130] ! I0603 14:50:38.575393       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0603 14:51:55.260247    9752 command_runner.go:130] ! W0603 14:50:38.575496       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260297    9752 command_runner.go:130] ! W0603 14:50:38.575505       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260297    9752 command_runner.go:130] ! I0603 14:50:38.576166       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0603 14:51:55.260320    9752 command_runner.go:130] ! W0603 14:50:38.576256       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260320    9752 command_runner.go:130] ! W0603 14:50:38.576314       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.577021       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.579498       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.579572       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.579581       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.580213       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.580317       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.580354       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.581564       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.581613       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.584780       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.585003       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.585204       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.586651       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.586996       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.587142       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.595038       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.595233       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.595389       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.598793       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.602076       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.614489       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.614724       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.625009       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.625156       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.625167       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.628702       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.628761       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.628770       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.629748       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.629860       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260370    9752 command_runner.go:130] ! I0603 14:50:38.645169       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0603 14:51:55.260370    9752 command_runner.go:130] ! W0603 14:50:38.645265       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0603 14:51:55.260895    9752 command_runner.go:130] ! I0603 14:50:39.261254       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:55.260895    9752 command_runner.go:130] ! I0603 14:50:39.261440       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:55.260895    9752 command_runner.go:130] ! I0603 14:50:39.261269       1 secure_serving.go:213] Serving securely on [::]:8443
	I0603 14:51:55.260895    9752 command_runner.go:130] ! I0603 14:50:39.261878       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:55.260971    9752 command_runner.go:130] ! I0603 14:50:39.262067       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0603 14:51:55.260971    9752 command_runner.go:130] ! I0603 14:50:39.265023       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0603 14:51:55.261018    9752 command_runner.go:130] ! I0603 14:50:39.265458       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0603 14:51:55.261018    9752 command_runner.go:130] ! I0603 14:50:39.265691       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0603 14:51:55.261018    9752 command_runner.go:130] ! I0603 14:50:39.266224       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0603 14:51:55.261018    9752 command_runner.go:130] ! I0603 14:50:39.266475       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0603 14:51:55.261079    9752 command_runner.go:130] ! I0603 14:50:39.266740       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0603 14:51:55.261079    9752 command_runner.go:130] ! I0603 14:50:39.267054       1 aggregator.go:163] waiting for initial CRD sync...
	I0603 14:51:55.261079    9752 command_runner.go:130] ! I0603 14:50:39.267429       1 controller.go:116] Starting legacy_token_tracking_controller
	I0603 14:51:55.261079    9752 command_runner.go:130] ! I0603 14:50:39.267943       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0603 14:51:55.261079    9752 command_runner.go:130] ! I0603 14:50:39.268211       1 controller.go:78] Starting OpenAPI AggregationController
	I0603 14:51:55.261143    9752 command_runner.go:130] ! I0603 14:50:39.268471       1 available_controller.go:423] Starting AvailableConditionController
	I0603 14:51:55.261165    9752 command_runner.go:130] ! I0603 14:50:39.268557       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0603 14:51:55.261165    9752 command_runner.go:130] ! I0603 14:50:39.268599       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0603 14:51:55.261190    9752 command_runner.go:130] ! I0603 14:50:39.269220       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0603 14:51:55.261216    9752 command_runner.go:130] ! I0603 14:50:39.284296       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:55.261242    9752 command_runner.go:130] ! I0603 14:50:39.284599       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:55.261242    9752 command_runner.go:130] ! I0603 14:50:39.269381       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0603 14:51:55.261242    9752 command_runner.go:130] ! I0603 14:50:39.285184       1 controller.go:139] Starting OpenAPI controller
	I0603 14:51:55.261281    9752 command_runner.go:130] ! I0603 14:50:39.285202       1 controller.go:87] Starting OpenAPI V3 controller
	I0603 14:51:55.261281    9752 command_runner.go:130] ! I0603 14:50:39.285216       1 naming_controller.go:291] Starting NamingConditionController
	I0603 14:51:55.261281    9752 command_runner.go:130] ! I0603 14:50:39.285225       1 establishing_controller.go:76] Starting EstablishingController
	I0603 14:51:55.261423    9752 command_runner.go:130] ! I0603 14:50:39.285237       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 14:51:55.261525    9752 command_runner.go:130] ! I0603 14:50:39.285244       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 14:51:55.261546    9752 command_runner.go:130] ! I0603 14:50:39.285251       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 14:51:55.261546    9752 command_runner.go:130] ! I0603 14:50:39.285707       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0603 14:51:55.261546    9752 command_runner.go:130] ! I0603 14:50:39.307386       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0603 14:51:55.261607    9752 command_runner.go:130] ! I0603 14:50:39.313286       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0603 14:51:55.261607    9752 command_runner.go:130] ! I0603 14:50:39.410099       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 14:51:55.261632    9752 command_runner.go:130] ! I0603 14:50:39.413505       1 aggregator.go:165] initial CRD sync complete...
	I0603 14:51:55.261632    9752 command_runner.go:130] ! I0603 14:50:39.413538       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 14:51:55.261632    9752 command_runner.go:130] ! I0603 14:50:39.413547       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 14:51:55.261688    9752 command_runner.go:130] ! I0603 14:50:39.450903       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 14:51:55.261730    9752 command_runner.go:130] ! I0603 14:50:39.462513       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:51:55.261730    9752 command_runner.go:130] ! I0603 14:50:39.464182       1 policy_source.go:224] refreshing policies
	I0603 14:51:55.261818    9752 command_runner.go:130] ! I0603 14:50:39.465876       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 14:51:55.261842    9752 command_runner.go:130] ! I0603 14:50:39.466992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 14:51:55.261842    9752 command_runner.go:130] ! I0603 14:50:39.468755       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 14:51:55.261842    9752 command_runner.go:130] ! I0603 14:50:39.469769       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 14:51:55.261896    9752 command_runner.go:130] ! I0603 14:50:39.474781       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 14:51:55.261919    9752 command_runner.go:130] ! I0603 14:50:39.486280       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 14:51:55.261919    9752 command_runner.go:130] ! I0603 14:50:39.486306       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 14:51:55.261919    9752 command_runner.go:130] ! I0603 14:50:39.514217       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 14:51:55.261973    9752 command_runner.go:130] ! I0603 14:50:39.514539       1 cache.go:39] Caches are synced for autoregister controller
	I0603 14:51:55.261973    9752 command_runner.go:130] ! I0603 14:50:40.271657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 14:51:55.261973    9752 command_runner.go:130] ! W0603 14:50:40.806504       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.22.154.20]
	I0603 14:51:55.262030    9752 command_runner.go:130] ! I0603 14:50:40.811756       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 14:51:55.262030    9752 command_runner.go:130] ! I0603 14:50:40.836037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 14:51:55.262054    9752 command_runner.go:130] ! I0603 14:50:42.134633       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 14:51:55.262054    9752 command_runner.go:130] ! I0603 14:50:42.350516       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 14:51:55.262054    9752 command_runner.go:130] ! I0603 14:50:42.378696       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 14:51:55.262054    9752 command_runner.go:130] ! I0603 14:50:42.521546       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 14:51:55.262119    9752 command_runner.go:130] ! I0603 14:50:42.533218       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 14:51:55.268359    9752 logs.go:123] Gathering logs for etcd [480ef64cfa22] ...
	I0603 14:51:55.268359    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480ef64cfa22"
	I0603 14:51:55.293428    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:35.886507Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.887805Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.22.154.20:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.22.154.20:2380","--initial-cluster=multinode-720500=https://172.22.154.20:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.22.154.20:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.22.154.20:2380","--name=multinode-720500","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888235Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:35.88843Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888669Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.22.154.20:2380"]}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.888851Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.900566Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"]}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.902079Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-720500","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.951251Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"47.801744ms"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:35.980047Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.011946Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","commit-index":2070}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=()"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became follower at term 2"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.013301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a5b02d21ad5b31ff [peers: [], term: 2, commit: 2070, applied: 0, lastindex: 2070, lastterm: 2]"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-03T14:50:36.026369Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.034388Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1394}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.043305Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1796}
	I0603 14:51:55.293645    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.052705Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0603 14:51:55.294186    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.062682Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"a5b02d21ad5b31ff","timeout":"7s"}
	I0603 14:51:55.294186    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.063103Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"a5b02d21ad5b31ff"}
	I0603 14:51:55.294186    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.063165Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"a5b02d21ad5b31ff","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0603 14:51:55.294186    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06697Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0603 14:51:55.294186    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06815Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0603 14:51:55.294348    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.068652Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0603 14:51:55.294348    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.06872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0603 14:51:55.294348    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.068733Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0603 14:51:55.294348    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=(11939092234824790527)"}
	I0603 14:51:55.294477    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","added-peer-id":"a5b02d21ad5b31ff","added-peer-peer-urls":["https://172.22.150.195:2380"]}
	I0603 14:51:55.294502    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","cluster-version":"3.5"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069633Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069793Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a5b02d21ad5b31ff","initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069837Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.069995Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.22.154.20:2380"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:36.070008Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.22.154.20:2380"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.714622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff is starting a new election at term 2"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became pre-candidate at term 2"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.71538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgPreVoteResp from a5b02d21ad5b31ff at term 2"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became candidate at term 3"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.715867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgVoteResp from a5b02d21ad5b31ff at term 3"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.716205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became leader at term 3"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.716405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a5b02d21ad5b31ff elected leader a5b02d21ad5b31ff at term 3"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.724847Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.724791Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a5b02d21ad5b31ff","local-member-attributes":"{Name:multinode-720500 ClientURLs:[https://172.22.154.20:2379]}","request-path":"/0/members/a5b02d21ad5b31ff/attributes","cluster-id":"6a80a2fe8578e5e6","publish-timeout":"7s"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.725564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.726196Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.726364Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.729309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0603 14:51:55.294531    9752 command_runner.go:130] ! {"level":"info","ts":"2024-06-03T14:50:37.730855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.22.154.20:2379"}
	I0603 14:51:55.301018    9752 logs.go:123] Gathering logs for coredns [68e49c3e6dda] ...
	I0603 14:51:55.301018    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68e49c3e6dda"
	I0603 14:51:55.326653    9752 command_runner.go:130] > .:53
	I0603 14:51:55.326723    9752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	I0603 14:51:55.326792    9752 command_runner.go:130] > CoreDNS-1.11.1
	I0603 14:51:55.326792    9752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 14:51:55.326827    9752 command_runner.go:130] > [INFO] 127.0.0.1:41900 - 64692 "HINFO IN 6455764258890599449.483474031935060007. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.132764335s
	I0603 14:51:55.326827    9752 command_runner.go:130] > [INFO] 10.244.1.2:42222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002636s
	I0603 14:51:55.326827    9752 command_runner.go:130] > [INFO] 10.244.1.2:57223 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.096802056s
	I0603 14:51:55.326827    9752 command_runner.go:130] > [INFO] 10.244.1.2:36397 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.151408488s
	I0603 14:51:55.326827    9752 command_runner.go:130] > [INFO] 10.244.1.2:59107 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.364951305s
	I0603 14:51:55.326900    9752 command_runner.go:130] > [INFO] 10.244.0.3:53007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004329s
	I0603 14:51:55.326921    9752 command_runner.go:130] > [INFO] 10.244.0.3:41844 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001542s
	I0603 14:51:55.326921    9752 command_runner.go:130] > [INFO] 10.244.0.3:33279 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174s
	I0603 14:51:55.326921    9752 command_runner.go:130] > [INFO] 10.244.0.3:34469 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0001054s
	I0603 14:51:55.326921    9752 command_runner.go:130] > [INFO] 10.244.1.2:33917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001325s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:49000 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.025227215s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:40535 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002926s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:57809 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001012s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:43376 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024865416s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:51758 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003251s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:42717 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:52073 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001596s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:39307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001382s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:57391 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000513s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:40338 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001263s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:45271 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001333s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:50324 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000215901s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:51522 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001987s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:39150 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001291s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:56081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001424s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:46468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003026s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:57532 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130801s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:36166 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001469s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:58091 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001725s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:52049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274601s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:51870 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002814s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:51517 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001499s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:39242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000636s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:34329 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260201s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:47951 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001521s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:52718 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0003583s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.1.2:45357 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001838s
	I0603 14:51:55.327002    9752 command_runner.go:130] > [INFO] 10.244.0.3:50865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001742s
	I0603 14:51:55.327522    9752 command_runner.go:130] > [INFO] 10.244.0.3:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001322s
	I0603 14:51:55.327585    9752 command_runner.go:130] > [INFO] 10.244.0.3:51977 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	I0603 14:51:55.327585    9752 command_runner.go:130] > [INFO] 10.244.0.3:47306 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001807s
	I0603 14:51:55.327585    9752 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0603 14:51:55.327585    9752 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0603 14:51:55.330284    9752 logs.go:123] Gathering logs for kindnet [008dec75d90c] ...
	I0603 14:51:55.330284    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 008dec75d90c"
	I0603 14:51:55.360500    9752 command_runner.go:130] ! I0603 14:50:42.082079       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 14:51:55.360500    9752 command_runner.go:130] ! I0603 14:50:42.082943       1 main.go:107] hostIP = 172.22.154.20
	I0603 14:51:55.360500    9752 command_runner.go:130] ! podIP = 172.22.154.20
	I0603 14:51:55.360596    9752 command_runner.go:130] ! I0603 14:50:42.083380       1 main.go:116] setting mtu 1500 for CNI 
	I0603 14:51:55.360617    9752 command_runner.go:130] ! I0603 14:50:42.083413       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 14:51:55.360617    9752 command_runner.go:130] ! I0603 14:50:42.083683       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 14:51:55.360617    9752 command_runner.go:130] ! I0603 14:51:12.571541       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0603 14:51:55.360683    9752 command_runner.go:130] ! I0603 14:51:12.651275       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:55.360683    9752 command_runner.go:130] ! I0603 14:51:12.651428       1 main.go:227] handling current node
	I0603 14:51:55.360708    9752 command_runner.go:130] ! I0603 14:51:12.652437       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.360708    9752 command_runner.go:130] ! I0603 14:51:12.652687       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.360708    9752 command_runner.go:130] ! I0603 14:51:12.652926       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.22.146.196 Flags: [] Table: 0} 
	I0603 14:51:55.360774    9752 command_runner.go:130] ! I0603 14:51:12.653574       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.360774    9752 command_runner.go:130] ! I0603 14:51:12.653674       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.360774    9752 command_runner.go:130] ! I0603 14:51:12.653740       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.22.151.134 Flags: [] Table: 0} 
	I0603 14:51:55.360854    9752 command_runner.go:130] ! I0603 14:51:22.664648       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:55.360854    9752 command_runner.go:130] ! I0603 14:51:22.664694       1 main.go:227] handling current node
	I0603 14:51:55.360854    9752 command_runner.go:130] ! I0603 14:51:22.664708       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.360854    9752 command_runner.go:130] ! I0603 14:51:22.664715       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:22.664826       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:22.665507       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:32.678392       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:32.678477       1 main.go:227] handling current node
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:32.678492       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:32.679315       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:32.679578       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:32.679593       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:42.686747       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:42.686840       1 main.go:227] handling current node
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:42.686854       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:42.686861       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:42.687305       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:42.687446       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:52.707609       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:52.707654       1 main.go:227] handling current node
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:52.707666       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:52.707672       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:52.708072       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.360943    9752 command_runner.go:130] ! I0603 14:51:52.708115       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.363849    9752 logs.go:123] Gathering logs for container status ...
	I0603 14:51:55.363849    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 14:51:55.434511    9752 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0603 14:51:55.434584    9752 command_runner.go:130] > f9b260d61dfbd       cbb01a7bd410d                                                                                         11 seconds ago       Running             coredns                   1                   1bc1567075734       coredns-7db6d8ff4d-c9wpc
	I0603 14:51:55.434584    9752 command_runner.go:130] > 291b656660b4b       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   526c48b9021d6       busybox-fc5497c4f-n2t5d
	I0603 14:51:55.434584    9752 command_runner.go:130] > c81abdbb29c7c       6e38f40d628db                                                                                         30 seconds ago       Running             storage-provisioner       2                   b4a4ad712a66e       storage-provisioner
	I0603 14:51:55.434584    9752 command_runner.go:130] > 008dec75d90c7       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a3698c141b116       kindnet-26s27
	I0603 14:51:55.434584    9752 command_runner.go:130] > 2061be0913b2b       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b4a4ad712a66e       storage-provisioner
	I0603 14:51:55.434584    9752 command_runner.go:130] > 42926c33070ce       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   2ae2b089ecf3b       kube-proxy-64l9x
	I0603 14:51:55.434584    9752 command_runner.go:130] > 885576ffcadd7       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   192b150e443d2       kube-apiserver-multinode-720500
	I0603 14:51:55.434584    9752 command_runner.go:130] > 480ef64cfa226       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   3e60bc15f541e       etcd-multinode-720500
	I0603 14:51:55.434584    9752 command_runner.go:130] > f14b3b67d8f28       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   29feb700b8ebf       kube-controller-manager-multinode-720500
	I0603 14:51:55.434584    9752 command_runner.go:130] > e2d000674d525       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   0461b752e7281       kube-scheduler-multinode-720500
	I0603 14:51:55.434584    9752 command_runner.go:130] > a76f9e773a2f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   e2a9c5dc3b1b0       busybox-fc5497c4f-n2t5d
	I0603 14:51:55.434584    9752 command_runner.go:130] > 68e49c3e6ddaa       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   1ac710138e878       coredns-7db6d8ff4d-c9wpc
	I0603 14:51:55.434584    9752 command_runner.go:130] > ab840a6a9856d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   91df341636e89       kindnet-26s27
	I0603 14:51:55.434584    9752 command_runner.go:130] > 3823f2e2bdb28       747097150317f                                                                                         24 minutes ago       Exited              kube-proxy                0                   45c98b77811e1       kube-proxy-64l9x
	I0603 14:51:55.434584    9752 command_runner.go:130] > 63a6ebee2e836       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   19b3080db261a       kube-controller-manager-multinode-720500
	I0603 14:51:55.434584    9752 command_runner.go:130] > ec3860b2bb3ef       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   73f8312902b01       kube-scheduler-multinode-720500
	I0603 14:51:55.437050    9752 logs.go:123] Gathering logs for kindnet [ab840a6a9856] ...
	I0603 14:51:55.437050    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab840a6a9856"
	I0603 14:51:55.464262    9752 command_runner.go:130] ! I0603 14:37:02.418496       1 main.go:227] handling current node
	I0603 14:51:55.464262    9752 command_runner.go:130] ! I0603 14:37:02.418509       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.464851    9752 command_runner.go:130] ! I0603 14:37:02.418514       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.464851    9752 command_runner.go:130] ! I0603 14:37:02.419057       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.464851    9752 command_runner.go:130] ! I0603 14:37:02.419146       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.464976    9752 command_runner.go:130] ! I0603 14:37:12.433874       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.464976    9752 command_runner.go:130] ! I0603 14:37:12.433964       1 main.go:227] handling current node
	I0603 14:51:55.465288    9752 command_runner.go:130] ! I0603 14:37:12.433979       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.465288    9752 command_runner.go:130] ! I0603 14:37:12.433987       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.466023    9752 command_runner.go:130] ! I0603 14:37:12.434708       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.466023    9752 command_runner.go:130] ! I0603 14:37:12.434812       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.466023    9752 command_runner.go:130] ! I0603 14:37:22.441734       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.466353    9752 command_runner.go:130] ! I0603 14:37:22.443317       1 main.go:227] handling current node
	I0603 14:51:55.466353    9752 command_runner.go:130] ! I0603 14:37:22.443366       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.467344    9752 command_runner.go:130] ! I0603 14:37:22.443394       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.467748    9752 command_runner.go:130] ! I0603 14:37:22.443536       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469431    9752 command_runner.go:130] ! I0603 14:37:22.443544       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469507    9752 command_runner.go:130] ! I0603 14:37:32.458669       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469577    9752 command_runner.go:130] ! I0603 14:37:32.458715       1 main.go:227] handling current node
	I0603 14:51:55.469577    9752 command_runner.go:130] ! I0603 14:37:32.458746       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469577    9752 command_runner.go:130] ! I0603 14:37:32.458759       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469577    9752 command_runner.go:130] ! I0603 14:37:32.459272       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469577    9752 command_runner.go:130] ! I0603 14:37:32.459313       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469640    9752 command_runner.go:130] ! I0603 14:37:42.465893       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469640    9752 command_runner.go:130] ! I0603 14:37:42.466039       1 main.go:227] handling current node
	I0603 14:51:55.469640    9752 command_runner.go:130] ! I0603 14:37:42.466054       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469704    9752 command_runner.go:130] ! I0603 14:37:42.466062       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469704    9752 command_runner.go:130] ! I0603 14:37:42.466530       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469734    9752 command_runner.go:130] ! I0603 14:37:42.466713       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469734    9752 command_runner.go:130] ! I0603 14:37:52.484160       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469734    9752 command_runner.go:130] ! I0603 14:37:52.484343       1 main.go:227] handling current node
	I0603 14:51:55.469734    9752 command_runner.go:130] ! I0603 14:37:52.484358       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469808    9752 command_runner.go:130] ! I0603 14:37:52.484366       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469808    9752 command_runner.go:130] ! I0603 14:37:52.484918       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469808    9752 command_runner.go:130] ! I0603 14:37:52.485003       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469874    9752 command_runner.go:130] ! I0603 14:38:02.499379       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469899    9752 command_runner.go:130] ! I0603 14:38:02.500157       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:02.500459       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:02.500600       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:02.500943       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:02.501037       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:12.510568       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:12.510676       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:12.510691       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:12.510699       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:12.511065       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:12.511143       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:22.523564       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:22.523667       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:22.523681       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:22.523690       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:22.524005       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:22.524127       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:32.531830       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:32.532127       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:32.532312       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:32.532328       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:32.532640       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:32.532677       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:42.545963       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:42.546065       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:42.546080       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:42.546088       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:42.546348       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:42.546488       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:52.559438       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:52.559480       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:52.559491       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:52.559497       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:52.559891       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:38:52.560039       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:39:02.565901       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:39:02.566044       1 main.go:227] handling current node
	I0603 14:51:55.469927    9752 command_runner.go:130] ! I0603 14:39:02.566059       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470458    9752 command_runner.go:130] ! I0603 14:39:02.566066       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470458    9752 command_runner.go:130] ! I0603 14:39:02.566452       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470504    9752 command_runner.go:130] ! I0603 14:39:02.566542       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470504    9752 command_runner.go:130] ! I0603 14:39:12.580562       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470504    9752 command_runner.go:130] ! I0603 14:39:12.580900       1 main.go:227] handling current node
	I0603 14:51:55.470504    9752 command_runner.go:130] ! I0603 14:39:12.581000       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470504    9752 command_runner.go:130] ! I0603 14:39:12.581036       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470504    9752 command_runner.go:130] ! I0603 14:39:12.581299       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470606    9752 command_runner.go:130] ! I0603 14:39:12.581368       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470606    9752 command_runner.go:130] ! I0603 14:39:22.589560       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470606    9752 command_runner.go:130] ! I0603 14:39:22.589667       1 main.go:227] handling current node
	I0603 14:51:55.470606    9752 command_runner.go:130] ! I0603 14:39:22.589684       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470606    9752 command_runner.go:130] ! I0603 14:39:22.589692       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470606    9752 command_runner.go:130] ! I0603 14:39:22.590588       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470676    9752 command_runner.go:130] ! I0603 14:39:22.590765       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:32.597414       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:32.597518       1 main.go:227] handling current node
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:32.597534       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:32.597541       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:32.597952       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:32.598225       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:42.608987       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:42.609016       1 main.go:227] handling current node
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:42.609075       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:42.609129       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:42.609601       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:42.609617       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:52.622153       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:52.622304       1 main.go:227] handling current node
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:52.622322       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:52.622329       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:52.622994       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:39:52.623087       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:02.643681       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:02.643725       1 main.go:227] handling current node
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:02.643738       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:02.643744       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:02.644288       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:02.644378       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:12.652030       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.470729    9752 command_runner.go:130] ! I0603 14:40:12.652123       1 main.go:227] handling current node
	I0603 14:51:55.471328    9752 command_runner.go:130] ! I0603 14:40:12.652138       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.471429    9752 command_runner.go:130] ! I0603 14:40:12.652145       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.472310    9752 command_runner.go:130] ! I0603 14:40:12.652402       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:12.652480       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:22.661893       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:22.661999       1 main.go:227] handling current node
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:22.662015       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:22.662023       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:22.662623       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.472750    9752 command_runner.go:130] ! I0603 14:40:22.662711       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.472874    9752 command_runner.go:130] ! I0603 14:40:32.676552       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.472904    9752 command_runner.go:130] ! I0603 14:40:32.676654       1 main.go:227] handling current node
	I0603 14:51:55.472904    9752 command_runner.go:130] ! I0603 14:40:32.676669       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.472904    9752 command_runner.go:130] ! I0603 14:40:32.676677       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.472980    9752 command_runner.go:130] ! I0603 14:40:32.676798       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.472980    9752 command_runner.go:130] ! I0603 14:40:32.676829       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.472980    9752 command_runner.go:130] ! I0603 14:40:42.690358       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473057    9752 command_runner.go:130] ! I0603 14:40:42.690463       1 main.go:227] handling current node
	I0603 14:51:55.473080    9752 command_runner.go:130] ! I0603 14:40:42.690478       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473080    9752 command_runner.go:130] ! I0603 14:40:42.690485       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:42.691131       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:42.691265       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:52.704086       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:52.704406       1 main.go:227] handling current node
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:52.704615       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:52.704801       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:52.705555       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:40:52.705594       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:02.714922       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:02.715404       1 main.go:227] handling current node
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:02.715629       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:02.715697       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:02.715836       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:02.717286       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:12.733829       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:12.733940       1 main.go:227] handling current node
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:12.733954       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:12.733962       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:12.734767       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:12.734861       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:22.747461       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:22.747575       1 main.go:227] handling current node
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:22.747589       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:22.747596       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:22.748388       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:22.748478       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:32.755048       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:32.755098       1 main.go:227] handling current node
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:32.755111       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:32.755118       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:32.755281       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:32.755297       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473109    9752 command_runner.go:130] ! I0603 14:41:42.769640       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473644    9752 command_runner.go:130] ! I0603 14:41:42.769732       1 main.go:227] handling current node
	I0603 14:51:55.473644    9752 command_runner.go:130] ! I0603 14:41:42.769748       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473644    9752 command_runner.go:130] ! I0603 14:41:42.769756       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473689    9752 command_runner.go:130] ! I0603 14:41:42.769900       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473740    9752 command_runner.go:130] ! I0603 14:41:42.769930       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473740    9752 command_runner.go:130] ! I0603 14:41:52.777787       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473740    9752 command_runner.go:130] ! I0603 14:41:52.777885       1 main.go:227] handling current node
	I0603 14:51:55.473740    9752 command_runner.go:130] ! I0603 14:41:52.777901       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473803    9752 command_runner.go:130] ! I0603 14:41:52.777909       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473803    9752 command_runner.go:130] ! I0603 14:41:52.778034       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473803    9752 command_runner.go:130] ! I0603 14:41:52.778047       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473803    9752 command_runner.go:130] ! I0603 14:42:02.796158       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473886    9752 command_runner.go:130] ! I0603 14:42:02.796336       1 main.go:227] handling current node
	I0603 14:51:55.473908    9752 command_runner.go:130] ! I0603 14:42:02.796352       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:02.796361       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:02.796675       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:02.796693       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:12.804901       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:12.805658       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:12.805981       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:12.806077       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:12.808338       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:12.808446       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:22.822735       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:22.822779       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:22.822792       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:22.822798       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:22.823041       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:22.823056       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:32.829730       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:32.829780       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:32.829793       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:32.829798       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:32.830081       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:32.830157       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:42.843959       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:42.844251       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:42.844269       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:42.844278       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:42.844481       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:42.844489       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:52.970825       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:52.970941       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:52.970957       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:52.970965       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:52.971359       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:42:52.971390       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:02.985233       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:02.985707       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:02.985801       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:02.985813       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:02.986087       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:02.986213       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:13.001792       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:13.001903       1 main.go:227] handling current node
	I0603 14:51:55.473935    9752 command_runner.go:130] ! I0603 14:43:13.001919       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474472    9752 command_runner.go:130] ! I0603 14:43:13.001926       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474472    9752 command_runner.go:130] ! I0603 14:43:13.002409       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474472    9752 command_runner.go:130] ! I0603 14:43:13.002546       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474552    9752 command_runner.go:130] ! I0603 14:43:23.014350       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474552    9752 command_runner.go:130] ! I0603 14:43:23.014430       1 main.go:227] handling current node
	I0603 14:51:55.474552    9752 command_runner.go:130] ! I0603 14:43:23.014443       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474552    9752 command_runner.go:130] ! I0603 14:43:23.014466       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474552    9752 command_runner.go:130] ! I0603 14:43:23.014973       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474552    9752 command_runner.go:130] ! I0603 14:43:23.015050       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474635    9752 command_runner.go:130] ! I0603 14:43:33.028486       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474635    9752 command_runner.go:130] ! I0603 14:43:33.028618       1 main.go:227] handling current node
	I0603 14:51:55.474635    9752 command_runner.go:130] ! I0603 14:43:33.028632       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474635    9752 command_runner.go:130] ! I0603 14:43:33.028639       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474635    9752 command_runner.go:130] ! I0603 14:43:33.028797       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474709    9752 command_runner.go:130] ! I0603 14:43:33.029137       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474709    9752 command_runner.go:130] ! I0603 14:43:43.042807       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474709    9752 command_runner.go:130] ! I0603 14:43:43.042971       1 main.go:227] handling current node
	I0603 14:51:55.474709    9752 command_runner.go:130] ! I0603 14:43:43.043055       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474709    9752 command_runner.go:130] ! I0603 14:43:43.043063       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474830    9752 command_runner.go:130] ! I0603 14:43:43.043998       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474854    9752 command_runner.go:130] ! I0603 14:43:43.044018       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:43:53.060985       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:43:53.061106       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:43:53.061142       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:43:53.061153       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:43:53.061441       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:43:53.061530       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:03.074882       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:03.075006       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:03.075023       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:03.075031       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:03.075251       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:03.075287       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:13.082515       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:13.082634       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:13.082649       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:13.082657       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:13.083854       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:13.084020       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:23.096516       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:23.096561       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:23.096574       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:23.096585       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:23.098310       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:23.098383       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:33.105034       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:33.105146       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:33.105199       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:33.105211       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:33.105354       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:33.105362       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:43.115437       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:43.115557       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:43.115572       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:43.115580       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:43.116248       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:43.116325       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:53.129841       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:53.129952       1 main.go:227] handling current node
	I0603 14:51:55.474882    9752 command_runner.go:130] ! I0603 14:44:53.129967       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475435    9752 command_runner.go:130] ! I0603 14:44:53.129992       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475435    9752 command_runner.go:130] ! I0603 14:44:53.130474       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475435    9752 command_runner.go:130] ! I0603 14:44:53.130513       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475435    9752 command_runner.go:130] ! I0603 14:45:03.145387       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475501    9752 command_runner.go:130] ! I0603 14:45:03.145506       1 main.go:227] handling current node
	I0603 14:51:55.475501    9752 command_runner.go:130] ! I0603 14:45:03.145522       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475501    9752 command_runner.go:130] ! I0603 14:45:03.145529       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475552    9752 command_runner.go:130] ! I0603 14:45:03.145991       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475552    9752 command_runner.go:130] ! I0603 14:45:03.146104       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475552    9752 command_runner.go:130] ! I0603 14:45:13.154208       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475552    9752 command_runner.go:130] ! I0603 14:45:13.154303       1 main.go:227] handling current node
	I0603 14:51:55.475613    9752 command_runner.go:130] ! I0603 14:45:13.154318       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475613    9752 command_runner.go:130] ! I0603 14:45:13.154325       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475613    9752 command_runner.go:130] ! I0603 14:45:13.154444       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475613    9752 command_runner.go:130] ! I0603 14:45:13.154751       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475678    9752 command_runner.go:130] ! I0603 14:45:23.167023       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475678    9752 command_runner.go:130] ! I0603 14:45:23.167139       1 main.go:227] handling current node
	I0603 14:51:55.475703    9752 command_runner.go:130] ! I0603 14:45:23.167156       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:23.167204       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:23.167490       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:23.167675       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:33.182518       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:33.182565       1 main.go:227] handling current node
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:33.182579       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:33.182586       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:33.183095       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:33.183227       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:43.191204       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:43.191291       1 main.go:227] handling current node
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:43.191307       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:43.191316       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:43.191713       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:43.191805       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:53.200715       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:53.200890       1 main.go:227] handling current node
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:53.200927       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:53.200936       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:53.201688       1 main.go:223] Handling node with IPs: map[172.22.145.66:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:45:53.201766       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.2.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:03.207719       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:03.207807       1 main.go:227] handling current node
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:03.207821       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:03.207828       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.222386       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.222505       1 main.go:227] handling current node
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.222522       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.222530       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.223020       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.223269       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:13.223648       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.22.151.134 Flags: [] Table: 0} 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:23.237715       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:23.237767       1 main.go:227] handling current node
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:23.237797       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:23.237803       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:23.237989       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:23.238008       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:33.244795       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.475731    9752 command_runner.go:130] ! I0603 14:46:33.244940       1 main.go:227] handling current node
	I0603 14:51:55.476258    9752 command_runner.go:130] ! I0603 14:46:33.244960       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476258    9752 command_runner.go:130] ! I0603 14:46:33.244971       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476258    9752 command_runner.go:130] ! I0603 14:46:33.245647       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476302    9752 command_runner.go:130] ! I0603 14:46:33.245764       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476302    9752 command_runner.go:130] ! I0603 14:46:43.261658       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476302    9752 command_runner.go:130] ! I0603 14:46:43.262286       1 main.go:227] handling current node
	I0603 14:51:55.476302    9752 command_runner.go:130] ! I0603 14:46:43.262368       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476302    9752 command_runner.go:130] ! I0603 14:46:43.262496       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476302    9752 command_runner.go:130] ! I0603 14:46:43.262847       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476400    9752 command_runner.go:130] ! I0603 14:46:43.262938       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476400    9752 command_runner.go:130] ! I0603 14:46:53.275414       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476400    9752 command_runner.go:130] ! I0603 14:46:53.275880       1 main.go:227] handling current node
	I0603 14:51:55.476400    9752 command_runner.go:130] ! I0603 14:46:53.276199       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476400    9752 command_runner.go:130] ! I0603 14:46:53.276372       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476479    9752 command_runner.go:130] ! I0603 14:46:53.276690       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476503    9752 command_runner.go:130] ! I0603 14:46:53.276766       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:03.282970       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:03.283067       1 main.go:227] handling current node
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:03.283157       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:03.283220       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:03.283747       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:03.283832       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:13.289208       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:13.289296       1 main.go:227] handling current node
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:13.289311       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:13.289321       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:13.290501       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:13.290610       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:23.305390       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:23.305479       1 main.go:227] handling current node
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:23.305494       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:23.305501       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:23.306027       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:23.306196       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:33.320017       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:33.320267       1 main.go:227] handling current node
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:33.320364       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:33.320399       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:33.320800       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:33.320833       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:43.329989       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:43.330122       1 main.go:227] handling current node
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:43.330326       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:43.330486       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:43.331007       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.476532    9752 command_runner.go:130] ! I0603 14:47:43.331092       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.477068    9752 command_runner.go:130] ! I0603 14:47:53.346870       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.477116    9752 command_runner.go:130] ! I0603 14:47:53.347021       1 main.go:227] handling current node
	I0603 14:51:55.477116    9752 command_runner.go:130] ! I0603 14:47:53.347035       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.477170    9752 command_runner.go:130] ! I0603 14:47:53.347043       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.477170    9752 command_runner.go:130] ! I0603 14:47:53.347400       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.477196    9752 command_runner.go:130] ! I0603 14:47:53.347581       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.477196    9752 command_runner.go:130] ! I0603 14:48:03.360705       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:51:55.477234    9752 command_runner.go:130] ! I0603 14:48:03.360878       1 main.go:227] handling current node
	I0603 14:51:55.477234    9752 command_runner.go:130] ! I0603 14:48:03.360896       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:51:55.477234    9752 command_runner.go:130] ! I0603 14:48:03.360904       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:51:55.477234    9752 command_runner.go:130] ! I0603 14:48:03.361256       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:51:55.477234    9752 command_runner.go:130] ! I0603 14:48:03.361334       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:51:55.494348    9752 logs.go:123] Gathering logs for dmesg ...
	I0603 14:51:55.494348    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 14:51:55.519414    9752 command_runner.go:130] > [Jun 3 14:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.128622] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.023991] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.000006] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.059620] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.020549] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0603 14:51:55.519414    9752 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +5.342920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.685939] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +1.735023] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [Jun 3 14:49] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0603 14:51:55.519414    9752 command_runner.go:130] > [ +50.878858] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.173829] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [Jun 3 14:50] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.115993] kauditd_printk_skb: 73 callbacks suppressed
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.526092] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.219569] systemd-fstab-generator[1032]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.239915] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +2.915659] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.214861] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.207351] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.266530] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.876661] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +0.110633] kauditd_printk_skb: 205 callbacks suppressed
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +3.640158] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +1.365325] kauditd_printk_skb: 49 callbacks suppressed
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +5.844179] kauditd_printk_skb: 25 callbacks suppressed
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +3.106296] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	I0603 14:51:55.519414    9752 command_runner.go:130] > [  +8.568344] kauditd_printk_skb: 70 callbacks suppressed
	I0603 14:51:55.521353    9752 logs.go:123] Gathering logs for describe nodes ...
	I0603 14:51:55.521353    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 14:51:55.733540    9752 command_runner.go:130] > Name:               multinode-720500
	I0603 14:51:55.733660    9752 command_runner.go:130] > Roles:              control-plane
	I0603 14:51:55.733660    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:55.733660    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:55.733660    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:55.733806    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500
	I0603 14:51:55.733806    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:55.733834    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:55.733834    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:55.733834    9752 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0603 14:51:55.733885    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_27_19_0700
	I0603 14:51:55.733885    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:55.733885    9752 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0603 14:51:55.733885    9752 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0603 14:51:55.733885    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:55.733885    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:55.733885    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:55.733885    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:27:15 +0000
	I0603 14:51:55.733885    9752 command_runner.go:130] > Taints:             <none>
	I0603 14:51:55.733885    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:55.733885    9752 command_runner.go:130] > Lease:
	I0603 14:51:55.733885    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500
	I0603 14:51:55.733885    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:55.733885    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:51:51 +0000
	I0603 14:51:55.733885    9752 command_runner.go:130] > Conditions:
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0603 14:51:55.733885    9752 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0603 14:51:55.733885    9752 command_runner.go:130] >   MemoryPressure   False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0603 14:51:55.733885    9752 command_runner.go:130] >   DiskPressure     False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0603 14:51:55.733885    9752 command_runner.go:130] >   PIDPressure      False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Ready            True    Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:51:20 +0000   KubeletReady                 kubelet is posting ready status
	I0603 14:51:55.733885    9752 command_runner.go:130] > Addresses:
	I0603 14:51:55.733885    9752 command_runner.go:130] >   InternalIP:  172.22.154.20
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Hostname:    multinode-720500
	I0603 14:51:55.733885    9752 command_runner.go:130] > Capacity:
	I0603 14:51:55.733885    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:55.733885    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:55.733885    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:55.733885    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:55.733885    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:55.733885    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:55.733885    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:55.733885    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:55.733885    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:55.733885    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:55.733885    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:55.733885    9752 command_runner.go:130] > System Info:
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Machine ID:                 d1c31924319744c587cc3327e70686c4
	I0603 14:51:55.733885    9752 command_runner.go:130] >   System UUID:                ea941aa7-cd12-1640-be08-34f8de2baf60
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Boot ID:                    81a28d6f-5e2f-4dbf-9879-01594b427fd6
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:55.733885    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:55.733885    9752 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0603 14:51:55.733885    9752 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0603 14:51:55.733885    9752 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0603 14:51:55.733885    9752 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:55.733885    9752 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0603 14:51:55.734418    9752 command_runner.go:130] >   default                     busybox-fc5497c4f-n2t5d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 14:51:55.734462    9752 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-c9wpc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0603 14:51:55.734462    9752 command_runner.go:130] >   kube-system                 etcd-multinode-720500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0603 14:51:55.734462    9752 command_runner.go:130] >   kube-system                 kindnet-26s27                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0603 14:51:55.734521    9752 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-720500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0603 14:51:55.734550    9752 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-720500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:55.734550    9752 command_runner.go:130] >   kube-system                 kube-proxy-64l9x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:55.734610    9752 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-720500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:55.734634    9752 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0603 14:51:55.734634    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:55.734634    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:55.734634    9752 command_runner.go:130] >   Resource           Requests     Limits
	I0603 14:51:55.734634    9752 command_runner.go:130] >   --------           --------     ------
	I0603 14:51:55.734690    9752 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0603 14:51:55.734690    9752 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0603 14:51:55.734690    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0603 14:51:55.734690    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0603 14:51:55.734690    9752 command_runner.go:130] > Events:
	I0603 14:51:55.734690    9752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 14:51:55.734752    9752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 14:51:55.734752    9752 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0603 14:51:55.734752    9752 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I0603 14:51:55.734813    9752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 14:51:55.734837    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:55.734837    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:55.734890    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:55.734915    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:55.734915    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:55.734915    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:55.734972    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:55.734972    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:55.734996    9752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0603 14:51:55.734996    9752 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	I0603 14:51:55.734996    9752 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-720500 status is now: NodeReady
	I0603 14:51:55.735054    9752 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0603 14:51:55.735054    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  81s (x8 over 81s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	I0603 14:51:55.735118    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    81s (x8 over 81s)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	I0603 14:51:55.735118    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     81s (x7 over 81s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	I0603 14:51:55.735118    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:55.735118    9752 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	I0603 14:51:55.735118    9752 command_runner.go:130] > Name:               multinode-720500-m02
	I0603 14:51:55.735186    9752 command_runner.go:130] > Roles:              <none>
	I0603 14:51:55.735186    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:55.735186    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:55.735186    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:55.735186    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500-m02
	I0603 14:51:55.735246    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:55.735246    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:55.735246    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:55.735246    9752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 14:51:55.735246    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_30_31_0700
	I0603 14:51:55.735313    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:55.735313    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:55.735313    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:55.735313    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:55.735313    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:30:30 +0000
	I0603 14:51:55.735399    9752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 14:51:55.735399    9752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 14:51:55.735456    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:55.735456    9752 command_runner.go:130] > Lease:
	I0603 14:51:55.735456    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500-m02
	I0603 14:51:55.735485    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:55.735485    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:47:23 +0000
	I0603 14:51:55.735485    9752 command_runner.go:130] > Conditions:
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 14:51:55.735485    9752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 14:51:55.735485    9752 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.735485    9752 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.735485    9752 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.735485    9752 command_runner.go:130] > Addresses:
	I0603 14:51:55.735485    9752 command_runner.go:130] >   InternalIP:  172.22.146.196
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Hostname:    multinode-720500-m02
	I0603 14:51:55.735485    9752 command_runner.go:130] > Capacity:
	I0603 14:51:55.735485    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:55.735485    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:55.735485    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:55.735485    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:55.735485    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:55.735485    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:55.735485    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:55.735485    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:55.735485    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:55.735485    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:55.735485    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:55.735485    9752 command_runner.go:130] > System Info:
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Machine ID:                 235e819893284fd6a235e0cb3c7475f0
	I0603 14:51:55.735485    9752 command_runner.go:130] >   System UUID:                e57aaa06-73e1-b24d-bfac-b1ae5e512ff1
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Boot ID:                    fe92bdd5-fbf4-4f1a-9684-a535d77de9c7
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:55.735485    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:55.735485    9752 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0603 14:51:55.735485    9752 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0603 14:51:55.735485    9752 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:55.735485    9752 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0603 14:51:55.735485    9752 command_runner.go:130] >   default                     busybox-fc5497c4f-mjhcf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0603 14:51:55.735485    9752 command_runner.go:130] >   kube-system                 kindnet-fmfz2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0603 14:51:55.735485    9752 command_runner.go:130] >   kube-system                 kube-proxy-sm9rr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0603 14:51:55.735485    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:55.735485    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:55.735485    9752 command_runner.go:130] >   Resource           Requests   Limits
	I0603 14:51:55.735485    9752 command_runner.go:130] >   --------           --------   ------
	I0603 14:51:55.735485    9752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 14:51:55.736011    9752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 14:51:55.736011    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 14:51:55.736011    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 14:51:55.736011    9752 command_runner.go:130] > Events:
	I0603 14:51:55.736011    9752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0603 14:51:55.736056    9752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientMemory
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasNoDiskPressure
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientPID
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-720500-m02 status is now: NodeReady
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  NodeNotReady             3m48s              node-controller  Node multinode-720500-m02 status is now: NodeNotReady
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	I0603 14:51:55.736089    9752 command_runner.go:130] > Name:               multinode-720500-m03
	I0603 14:51:55.736089    9752 command_runner.go:130] > Roles:              <none>
	I0603 14:51:55.736089    9752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-720500-m03
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-720500
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_03T14_46_05_0700
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0603 14:51:55.736089    9752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0603 14:51:55.736089    9752 command_runner.go:130] > CreationTimestamp:  Mon, 03 Jun 2024 14:46:04 +0000
	I0603 14:51:55.736089    9752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0603 14:51:55.736089    9752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0603 14:51:55.736089    9752 command_runner.go:130] > Unschedulable:      false
	I0603 14:51:55.736089    9752 command_runner.go:130] > Lease:
	I0603 14:51:55.736089    9752 command_runner.go:130] >   HolderIdentity:  multinode-720500-m03
	I0603 14:51:55.736089    9752 command_runner.go:130] >   AcquireTime:     <unset>
	I0603 14:51:55.736089    9752 command_runner.go:130] >   RenewTime:       Mon, 03 Jun 2024 14:47:06 +0000
	I0603 14:51:55.736089    9752 command_runner.go:130] > Conditions:
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0603 14:51:55.736089    9752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0603 14:51:55.736089    9752 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.736089    9752 command_runner.go:130] >   DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.736089    9752 command_runner.go:130] >   PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.736089    9752 command_runner.go:130] >   Ready            Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0603 14:51:55.736089    9752 command_runner.go:130] > Addresses:
	I0603 14:51:55.736089    9752 command_runner.go:130] >   InternalIP:  172.22.151.134
	I0603 14:51:55.736630    9752 command_runner.go:130] >   Hostname:    multinode-720500-m03
	I0603 14:51:55.736630    9752 command_runner.go:130] > Capacity:
	I0603 14:51:55.736630    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:55.736691    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:55.736691    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:55.736691    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:55.736691    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:55.736748    9752 command_runner.go:130] > Allocatable:
	I0603 14:51:55.736748    9752 command_runner.go:130] >   cpu:                2
	I0603 14:51:55.736748    9752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0603 14:51:55.736811    9752 command_runner.go:130] >   hugepages-2Mi:      0
	I0603 14:51:55.736811    9752 command_runner.go:130] >   memory:             2164264Ki
	I0603 14:51:55.736834    9752 command_runner.go:130] >   pods:               110
	I0603 14:51:55.736834    9752 command_runner.go:130] > System Info:
	I0603 14:51:55.736834    9752 command_runner.go:130] >   Machine ID:                 b3fc7859c5954f1297433aed117b91b8
	I0603 14:51:55.736834    9752 command_runner.go:130] >   System UUID:                e10deb53-3c27-6749-b4b3-758259579a7c
	I0603 14:51:55.736834    9752 command_runner.go:130] >   Boot ID:                    c5481ad8-4fd9-4085-86d3-6f705a8caf45
	I0603 14:51:55.736834    9752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0603 14:51:55.736834    9752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0603 14:51:55.736834    9752 command_runner.go:130] >   Operating System:           linux
	I0603 14:51:55.736933    9752 command_runner.go:130] >   Architecture:               amd64
	I0603 14:51:55.736951    9752 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0603 14:51:55.736951    9752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0603 14:51:55.736951    9752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0603 14:51:55.736951    9752 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0603 14:51:55.736951    9752 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0603 14:51:55.736951    9752 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0603 14:51:55.737039    9752 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0603 14:51:55.737065    9752 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0603 14:51:55.737091    9752 command_runner.go:130] >   kube-system                 kindnet-h58hc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0603 14:51:55.737091    9752 command_runner.go:130] >   kube-system                 kube-proxy-ctm5l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0603 14:51:55.737091    9752 command_runner.go:130] > Allocated resources:
	I0603 14:51:55.737121    9752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0603 14:51:55.737148    9752 command_runner.go:130] >   Resource           Requests   Limits
	I0603 14:51:55.737194    9752 command_runner.go:130] >   --------           --------   ------
	I0603 14:51:55.737217    9752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0603 14:51:55.737236    9752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0603 14:51:55.737285    9752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0603 14:51:55.737285    9752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0603 14:51:55.737319    9752 command_runner.go:130] > Events:
	I0603 14:51:55.737319    9752 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0603 14:51:55.737341    9752 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  Starting                 5m47s                  kube-proxy       
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-720500-m03 status is now: NodeReady
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m51s (x2 over 5m51s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m51s (x2 over 5m51s)  kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m51s (x2 over 5m51s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m51s                  kubelet          Updated Node Allocatable limit across pods
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  RegisteredNode           5m48s                  node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeReady                5m44s                  kubelet          Node multinode-720500-m03 status is now: NodeReady
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  NodeNotReady             4m8s                   node-controller  Node multinode-720500-m03 status is now: NodeNotReady
	I0603 14:51:55.737341    9752 command_runner.go:130] >   Normal  RegisteredNode           63s                    node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	I0603 14:51:55.746835    9752 logs.go:123] Gathering logs for coredns [f9b260d61dfb] ...
	I0603 14:51:55.746835    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9b260d61dfb"
	I0603 14:51:55.774878    9752 command_runner.go:130] > .:53
	I0603 14:51:55.774956    9752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	I0603 14:51:55.774956    9752 command_runner.go:130] > CoreDNS-1.11.1
	I0603 14:51:55.774956    9752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0603 14:51:55.774956    9752 command_runner.go:130] > [INFO] 127.0.0.1:44244 - 27530 "HINFO IN 6157212600695805867.8146164028617998750. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029059168s
	I0603 14:51:55.774956    9752 logs.go:123] Gathering logs for kube-scheduler [e2d000674d52] ...
	I0603 14:51:55.774956    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2d000674d52"
	I0603 14:51:55.798606    9752 command_runner.go:130] ! I0603 14:50:36.598072       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:55.799927    9752 command_runner.go:130] ! W0603 14:50:39.337367       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0603 14:51:55.799927    9752 command_runner.go:130] ! W0603 14:50:39.337481       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 14:51:55.800013    9752 command_runner.go:130] ! W0603 14:50:39.337517       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0603 14:51:55.800013    9752 command_runner.go:130] ! W0603 14:50:39.337620       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:51:55.800108    9752 command_runner.go:130] ! I0603 14:50:39.434477       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:51:55.800108    9752 command_runner.go:130] ! I0603 14:50:39.434769       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.800108    9752 command_runner.go:130] ! I0603 14:50:39.439758       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:51:55.800108    9752 command_runner.go:130] ! I0603 14:50:39.442615       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:51:55.800108    9752 command_runner.go:130] ! I0603 14:50:39.442644       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:55.800183    9752 command_runner.go:130] ! I0603 14:50:39.443721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:55.800183    9752 command_runner.go:130] ! I0603 14:50:39.542876       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:51:55.802826    9752 logs.go:123] Gathering logs for kube-controller-manager [f14b3b67d8f2] ...
	I0603 14:51:55.802878    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14b3b67d8f2"
	I0603 14:51:55.838656    9752 command_runner.go:130] ! I0603 14:50:37.132219       1 serving.go:380] Generated self-signed cert in-memory
	I0603 14:51:55.839193    9752 command_runner.go:130] ! I0603 14:50:37.965887       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 14:51:55.839193    9752 command_runner.go:130] ! I0603 14:50:37.966244       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.839193    9752 command_runner.go:130] ! I0603 14:50:37.969206       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 14:51:55.839273    9752 command_runner.go:130] ! I0603 14:50:37.969593       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:55.839273    9752 command_runner.go:130] ! I0603 14:50:37.970401       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 14:51:55.839273    9752 command_runner.go:130] ! I0603 14:50:37.970711       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:51:55.839273    9752 command_runner.go:130] ! I0603 14:50:41.339512       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0603 14:51:55.839342    9752 command_runner.go:130] ! I0603 14:50:41.341523       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0603 14:51:55.839342    9752 command_runner.go:130] ! E0603 14:50:41.352670       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0603 14:51:55.839342    9752 command_runner.go:130] ! I0603 14:50:41.352747       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0603 14:51:55.839397    9752 command_runner.go:130] ! I0603 14:50:41.352812       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0603 14:51:55.839651    9752 command_runner.go:130] ! I0603 14:50:41.408502       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 14:51:55.839651    9752 command_runner.go:130] ! I0603 14:50:41.409411       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 14:51:55.840545    9752 command_runner.go:130] ! I0603 14:50:41.409645       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 14:51:55.840545    9752 command_runner.go:130] ! I0603 14:50:41.419223       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0603 14:51:55.840545    9752 command_runner.go:130] ! I0603 14:50:41.421972       1 shared_informer.go:313] Waiting for caches to sync for job
	I0603 14:51:55.840545    9752 command_runner.go:130] ! I0603 14:50:41.422044       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0603 14:51:55.840545    9752 command_runner.go:130] ! I0603 14:50:41.427251       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0603 14:51:55.840663    9752 command_runner.go:130] ! I0603 14:50:41.427473       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0603 14:51:55.840663    9752 command_runner.go:130] ! I0603 14:50:41.427485       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.433520       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.433884       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.442828       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.442944       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.443317       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.443408       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.443456       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.444287       1 shared_informer.go:320] Caches are synced for tokens
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.448688       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.448996       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.449010       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.471390       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.478411       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.478486       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.496707       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.496851       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.496864       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.512398       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.512785       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.514642       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.526995       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.528483       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.528503       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.560312       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.560410       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! I0603 14:50:41.560606       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0603 14:51:55.840722    9752 command_runner.go:130] ! W0603 14:50:41.560637       1 shared_informer.go:597] resyncPeriod 13h36m9.576172414s is smaller than resyncCheckPeriod 18h19m8.512720564s and the informer has already started. Changing it to 18h19m8.512720564s
	I0603 14:51:55.841317    9752 command_runner.go:130] ! I0603 14:50:41.560790       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0603 14:51:55.841317    9752 command_runner.go:130] ! I0603 14:50:41.560834       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0603 14:51:55.841317    9752 command_runner.go:130] ! I0603 14:50:41.561009       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0603 14:51:55.841396    9752 command_runner.go:130] ! I0603 14:50:41.562817       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0603 14:51:55.841396    9752 command_runner.go:130] ! I0603 14:50:41.562891       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0603 14:51:55.841468    9752 command_runner.go:130] ! I0603 14:50:41.562939       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0603 14:51:55.841468    9752 command_runner.go:130] ! I0603 14:50:41.562993       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0603 14:51:55.841468    9752 command_runner.go:130] ! I0603 14:50:41.563015       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0603 14:51:55.841545    9752 command_runner.go:130] ! I0603 14:50:41.563032       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0603 14:51:55.841571    9752 command_runner.go:130] ! I0603 14:50:41.563098       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0603 14:51:55.841571    9752 command_runner.go:130] ! I0603 14:50:41.564183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0603 14:51:55.841617    9752 command_runner.go:130] ! I0603 14:50:41.564221       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0603 14:51:55.841661    9752 command_runner.go:130] ! I0603 14:50:41.564392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0603 14:51:55.841661    9752 command_runner.go:130] ! I0603 14:50:41.564485       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0603 14:51:55.841703    9752 command_runner.go:130] ! I0603 14:50:41.564524       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0603 14:51:55.841703    9752 command_runner.go:130] ! I0603 14:50:41.564636       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0603 14:51:55.841703    9752 command_runner.go:130] ! I0603 14:50:41.564663       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0603 14:51:55.841765    9752 command_runner.go:130] ! I0603 14:50:41.564687       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0603 14:51:55.841765    9752 command_runner.go:130] ! I0603 14:50:41.565005       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0603 14:51:55.841765    9752 command_runner.go:130] ! I0603 14:50:41.565020       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:55.841852    9752 command_runner.go:130] ! I0603 14:50:41.565041       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0603 14:51:55.841879    9752 command_runner.go:130] ! I0603 14:50:41.581314       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 14:51:55.841936    9752 command_runner.go:130] ! I0603 14:50:41.587130       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 14:51:55.841968    9752 command_runner.go:130] ! I0603 14:50:41.587228       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 14:51:55.841968    9752 command_runner.go:130] ! I0603 14:50:41.587968       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.594087       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.594455       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.594469       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.597147       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.597498       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.597530       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.607190       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.607598       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.607632       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.610674       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.610909       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.611242       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.614142       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.614447       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.614483       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.635724       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.635913       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.635952       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.636091       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.640219       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.640668       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.640872       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.653671       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.654023       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.654058       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.667205       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.667229       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.667236       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.669727       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.669883       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.726233       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.726660       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.729282       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.729661       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.729876       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.736485       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.737260       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0603 14:51:55.841996    9752 command_runner.go:130] ! E0603 14:50:41.740502       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.740814       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.740933       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.741056       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.750961       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.751223       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.751477       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.792608       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.792759       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.792773       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.844612       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.844676       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.844688       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 14:51:55.841996    9752 command_runner.go:130] ! I0603 14:50:41.896427       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0603 14:51:55.842896    9752 command_runner.go:130] ! I0603 14:50:41.896537       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0603 14:51:55.842945    9752 command_runner.go:130] ! I0603 14:50:41.896561       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0603 14:51:55.842945    9752 command_runner.go:130] ! I0603 14:50:41.896589       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0603 14:51:55.842945    9752 command_runner.go:130] ! I0603 14:50:41.942852       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0603 14:51:55.842945    9752 command_runner.go:130] ! I0603 14:50:41.943245       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0603 14:51:55.842945    9752 command_runner.go:130] ! I0603 14:50:41.943758       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0603 14:51:55.842945    9752 command_runner.go:130] ! I0603 14:50:41.993465       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0603 14:51:55.843068    9752 command_runner.go:130] ! I0603 14:50:41.993559       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0603 14:51:55.843068    9752 command_runner.go:130] ! I0603 14:50:41.993571       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0603 14:51:55.843068    9752 command_runner.go:130] ! I0603 14:50:42.042940       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0603 14:51:55.843068    9752 command_runner.go:130] ! I0603 14:50:42.043287       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0603 14:51:55.843137    9752 command_runner.go:130] ! I0603 14:50:42.043532       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0603 14:51:55.843137    9752 command_runner.go:130] ! I0603 14:50:42.043637       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0603 14:51:55.843137    9752 command_runner.go:130] ! I0603 14:50:52.110253       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0603 14:51:55.843194    9752 command_runner.go:130] ! I0603 14:50:52.110544       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0603 14:51:55.843218    9752 command_runner.go:130] ! I0603 14:50:52.110823       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0603 14:51:55.843218    9752 command_runner.go:130] ! I0603 14:50:52.111251       1 shared_informer.go:313] Waiting for caches to sync for node
	I0603 14:51:55.843218    9752 command_runner.go:130] ! I0603 14:50:52.114516       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0603 14:51:55.843289    9752 command_runner.go:130] ! I0603 14:50:52.114754       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0603 14:51:55.843289    9752 command_runner.go:130] ! I0603 14:50:52.114859       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0603 14:51:55.843289    9752 command_runner.go:130] ! I0603 14:50:52.115420       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0603 14:51:55.843289    9752 command_runner.go:130] ! I0603 14:50:52.120172       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 14:51:55.843289    9752 command_runner.go:130] ! I0603 14:50:52.120726       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 14:51:55.843378    9752 command_runner.go:130] ! I0603 14:50:52.120900       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 14:51:55.843378    9752 command_runner.go:130] ! I0603 14:50:52.130702       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0603 14:51:55.843378    9752 command_runner.go:130] ! I0603 14:50:52.132004       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0603 14:51:55.843378    9752 command_runner.go:130] ! I0603 14:50:52.132310       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0603 14:51:55.843439    9752 command_runner.go:130] ! I0603 14:50:52.135969       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0603 14:51:55.843439    9752 command_runner.go:130] ! I0603 14:50:52.136243       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0603 14:51:55.843464    9752 command_runner.go:130] ! I0603 14:50:52.136643       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.137507       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.137603       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.137643       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.137983       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.138267       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.138302       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.138609       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.138713       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.138746       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.138986       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.143612       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.143872       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.143971       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.153209       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.172692       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.193739       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.202204       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500\" does not exist"
	I0603 14:51:55.843492    9752 command_runner.go:130] ! I0603 14:50:52.202247       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:51:55.844088    9752 command_runner.go:130] ! I0603 14:50:52.202568       1 shared_informer.go:320] Caches are synced for TTL
	I0603 14:51:55.844088    9752 command_runner.go:130] ! I0603 14:50:52.202880       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:51:55.844177    9752 command_runner.go:130] ! I0603 14:50:52.206448       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 14:51:55.844177    9752 command_runner.go:130] ! I0603 14:50:52.209857       1 shared_informer.go:320] Caches are synced for expand
	I0603 14:51:55.844177    9752 command_runner.go:130] ! I0603 14:50:52.210173       1 shared_informer.go:320] Caches are synced for namespace
	I0603 14:51:55.844177    9752 command_runner.go:130] ! I0603 14:50:52.211842       1 shared_informer.go:320] Caches are synced for node
	I0603 14:51:55.844177    9752 command_runner.go:130] ! I0603 14:50:52.213573       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0603 14:51:55.844177    9752 command_runner.go:130] ! I0603 14:50:52.213786       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0603 14:51:55.844259    9752 command_runner.go:130] ! I0603 14:50:52.213951       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.214197       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.227537       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.228829       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.230275       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.233623       1 shared_informer.go:320] Caches are synced for HPA
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.237260       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.238266       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.238408       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.238593       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.239064       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.242643       1 shared_informer.go:320] Caches are synced for daemon sets
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.243734       1 shared_informer.go:320] Caches are synced for taint
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.243982       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.246907       1 shared_informer.go:320] Caches are synced for PVC protection
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.248798       1 shared_informer.go:320] Caches are synced for GC
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.249570       1 shared_informer.go:320] Caches are synced for service account
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.252842       1 shared_informer.go:320] Caches are synced for PV protection
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.254214       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.278584       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.278573       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.278738       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.278760       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.279382       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.288184       1 shared_informer.go:320] Caches are synced for disruption
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.293854       1 shared_informer.go:320] Caches are synced for deployment
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.294911       1 shared_informer.go:320] Caches are synced for stateful set
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.297844       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.297906       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.303945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.988424ms"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.304988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.899µs"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.309899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.433483ms"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.310618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.311874       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.315773       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.322625       1 shared_informer.go:320] Caches are synced for job
	I0603 14:51:55.844288    9752 command_runner.go:130] ! I0603 14:50:52.328121       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:51:55.844820    9752 command_runner.go:130] ! I0603 14:50:52.345391       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:51:55.844820    9752 command_runner.go:130] ! I0603 14:50:52.415295       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:51:55.844820    9752 command_runner.go:130] ! I0603 14:50:52.416018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:51:55.844820    9752 command_runner.go:130] ! I0603 14:50:52.421610       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:50:52.453966       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:50:52.465679       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:50:52.907461       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:50:52.937479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:50:52.937578       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:51:22.286800       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:51:45.740640       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.050345ms"
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:51:45.740735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.201µs"
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:51:45.758728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.201µs"
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:51:45.833756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.845189ms"
	I0603 14:51:55.844897    9752 command_runner.go:130] ! I0603 14:51:45.833914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.301µs"
	I0603 14:51:55.862777    9752 logs.go:123] Gathering logs for kubelet ...
	I0603 14:51:55.863779    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.461169    1389 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.461675    1389 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: I0603 14:50:30.463263    1389 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 kubelet[1389]: E0603 14:50:30.464581    1389 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:30 multinode-720500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.183733    1442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.183842    1442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: I0603 14:50:31.187119    1442 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 kubelet[1442]: E0603 14:50:31.187481    1442 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:31 multinode-720500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.822960    1525 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.823030    1525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.823310    1525 server.go:927] "Client rotation is on, will bootstrap in background"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.825110    1525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.838917    1525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.864578    1525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.864681    1525 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.865871    1525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.865955    1525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-720500","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.867023    1525 topology_manager.go:138] "Creating topology manager with none policy"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.867065    1525 container_manager_linux.go:301] "Creating device plugin manager"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.868032    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872473    1525 kubelet.go:400] "Attempting to sync node with API server"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872570    1525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.872603    1525 kubelet.go:312] "Adding apiserver pod source"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.874552    1525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.878535    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.878646    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.893763    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.881181    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.881366    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.883254    1525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.884826    1525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.885850    1525 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.886975    1525 server.go:1264] "Started kubelet"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.895136    1525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.899089    1525 server.go:455] "Adding debug handlers to kubelet server"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.899110    1525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.901004    1525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.902811    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.22.154.20:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-720500.17d5860f76c4d283  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-720500,UID:multinode-720500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-720500,},FirstTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,LastTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-720500,}"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.905416    1525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.915751    1525 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.921759    1525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.948843    1525 reconciler.go:26] "Reconciler: start to sync state"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.955483    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="200ms"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.955934    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.956139    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956405    1525 factory.go:221] Registration of the systemd container factory successfully
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956512    1525 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956608    1525 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.956737    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.958873    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.958985    1525 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: I0603 14:50:33.959014    1525 kubelet.go:2337] "Starting kubelet main sync loop"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.959250    1525 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.983497    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: W0603 14:50:33.993696    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:33 multinode-720500 kubelet[1525]: E0603 14:50:33.993829    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023526    1525 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023565    1525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.023586    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024426    1525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024488    1525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.024529    1525 policy_none.go:49] "None policy: Start"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.028955    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.030495    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.035699    1525 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.035745    1525 state_mem.go:35] "Initializing new in-memory state store"
	I0603 14:51:55.894772    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.036656    1525 state_mem.go:75] "Updated machine memory state"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.041946    1525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.042384    1525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.043501    1525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.049031    1525 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-720500\" not found"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.060498    1525 topology_manager.go:215] "Topology Admit Handler" podUID="f58e384885de6f2352fb028e836ba47f" podNamespace="kube-system" podName="kube-scheduler-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.061562    1525 topology_manager.go:215] "Topology Admit Handler" podUID="a9aa17bec6c8b90196f8771e2e5c6391" podNamespace="kube-system" podName="kube-apiserver-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.062289    1525 topology_manager.go:215] "Topology Admit Handler" podUID="78d1bd07ad8cdd8611c0b5d7e797ef30" podNamespace="kube-system" podName="kube-controller-manager-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.063858    1525 topology_manager.go:215] "Topology Admit Handler" podUID="7a9c45e53018cd74c5a13ccfd96f1479" podNamespace="kube-system" podName="etcd-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.065312    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b548c7f105007ea217eb3af0981a11ac9ecbfca503b21d85486e0b994bd5ea"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.075734    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.101720    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf3e16838818729d3b0679cd21964fdf47441ebf169a121ac598081429082e9d"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.120274    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91df341636e892cd93c25fa7ad7384bcf2bd819376c32058f4ee8317633ccdb9"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.136641    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73f8312902b01b75c8ea80234be416d3ffc9a1089252bd3c6d01a2cd098215be"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.156601    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.157623    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="400ms"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.173261    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b3080db261aed80f74241b549711c9e0e8bf8d76726121d9447965ca7e2087"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188271    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-kubeconfig\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188310    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-ca-certs\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188378    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-k8s-certs\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188400    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188427    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7a9c45e53018cd74c5a13ccfd96f1479-etcd-certs\") pod \"etcd-multinode-720500\" (UID: \"7a9c45e53018cd74c5a13ccfd96f1479\") " pod="kube-system/etcd-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188469    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7a9c45e53018cd74c5a13ccfd96f1479-etcd-data\") pod \"etcd-multinode-720500\" (UID: \"7a9c45e53018cd74c5a13ccfd96f1479\") " pod="kube-system/etcd-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188506    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f58e384885de6f2352fb028e836ba47f-kubeconfig\") pod \"kube-scheduler-multinode-720500\" (UID: \"f58e384885de6f2352fb028e836ba47f\") " pod="kube-system/kube-scheduler-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188525    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-ca-certs\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188569    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-k8s-certs\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188590    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78d1bd07ad8cdd8611c0b5d7e797ef30-flexvolume-dir\") pod \"kube-controller-manager-multinode-720500\" (UID: \"78d1bd07ad8cdd8611c0b5d7e797ef30\") " pod="kube-system/kube-controller-manager-multinode-720500"
	I0603 14:51:55.895751    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.188614    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9aa17bec6c8b90196f8771e2e5c6391-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-720500\" (UID: \"a9aa17bec6c8b90196f8771e2e5c6391\") " pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.189831    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45c98b77811e1a1610a97d2f641597b26b618ffe831fe5ad3ec241b34af76a6b"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.211600    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dbe33ccede837b8bf9917f1f085422d402ca29fcadcc3715a72edb8570a28f0"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.232599    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.233792    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.559275    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="800ms"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: I0603 14:50:34.635611    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.636574    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: W0603 14:50:34.930484    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 kubelet[1525]: E0603 14:50:34.930562    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-720500&limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.013602    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.013737    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.058377    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.058502    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: W0603 14:50:35.276396    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.276674    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.22.154.20:8443: connect: connection refused
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.361658    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-720500?timeout=10s\": dial tcp 172.22.154.20:8443: connect: connection refused" interval="1.6s"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: I0603 14:50:35.437822    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.439455    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.22.154.20:8443: connect: connection refused" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 kubelet[1525]: E0603 14:50:35.759532    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.22.154.20:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-720500.17d5860f76c4d283  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-720500,UID:multinode-720500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-720500,},FirstTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,LastTimestamp:2024-06-03 14:50:33.886954115 +0000 UTC m=+0.172818760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-720500,}"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:37 multinode-720500 kubelet[1525]: I0603 14:50:37.041688    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.524109    1525 kubelet_node_status.go:112] "Node was previously registered" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.524300    1525 kubelet_node_status.go:76] "Successfully registered node" node="multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.525714    1525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.527071    1525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.528427    1525 setters.go:580] "Node became not ready" node="multinode-720500" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-03T14:50:39Z","lastTransitionTime":"2024-06-03T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.569920    1525 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-720500\" already exists" pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.884500    1525 apiserver.go:52] "Watching apiserver"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.889699    1525 topology_manager.go:215] "Topology Admit Handler" podUID="ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a" podNamespace="kube-system" podName="kube-proxy-64l9x"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.889893    1525 topology_manager.go:215] "Topology Admit Handler" podUID="08ea7c30-4962-4026-8eb0-6864835e97e6" podNamespace="kube-system" podName="kindnet-26s27"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890015    1525 topology_manager.go:215] "Topology Admit Handler" podUID="5d120704-a803-4278-aa7c-32304a6164a3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c9wpc"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890251    1525 topology_manager.go:215] "Topology Admit Handler" podUID="8380cfdf-9758-4fd8-a511-db50974806a2" podNamespace="kube-system" podName="storage-provisioner"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890408    1525 topology_manager.go:215] "Topology Admit Handler" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef" podNamespace="default" podName="busybox-fc5497c4f-n2t5d"
	I0603 14:51:55.896764    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.890532    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-720500" podUID="a99295b9-ba4f-4b3f-9bc7-3e6e09de9b09"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.890739    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.891991    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.919591    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-720500"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.922418    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947805    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a-lib-modules\") pod \"kube-proxy-64l9x\" (UID: \"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a\") " pod="kube-system/kube-proxy-64l9x"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947924    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-cni-cfg\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947970    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-xtables-lock\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.947990    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8380cfdf-9758-4fd8-a511-db50974806a2-tmp\") pod \"storage-provisioner\" (UID: \"8380cfdf-9758-4fd8-a511-db50974806a2\") " pod="kube-system/storage-provisioner"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.948046    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a-xtables-lock\") pod \"kube-proxy-64l9x\" (UID: \"ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a\") " pod="kube-system/kube-proxy-64l9x"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.948118    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08ea7c30-4962-4026-8eb0-6864835e97e6-lib-modules\") pod \"kindnet-26s27\" (UID: \"08ea7c30-4962-4026-8eb0-6864835e97e6\") " pod="kube-system/kindnet-26s27"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.949354    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.949442    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:40.449414293 +0000 UTC m=+6.735278838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.967616    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc25f3659bb9b137f23bf9424dba20e" path="/var/lib/kubelet/pods/2dc25f3659bb9b137f23bf9424dba20e/volumes"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: I0603 14:50:39.969042    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36433239452f37b4b0410f69c12da408" path="/var/lib/kubelet/pods/36433239452f37b4b0410f69c12da408/volumes"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984720    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984802    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 kubelet[1525]: E0603 14:50:39.984886    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:40.484862826 +0000 UTC m=+6.770727471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: I0603 14:50:40.019663    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-720500" podStartSLOduration=1.019649758 podStartE2EDuration="1.019649758s" podCreationTimestamp="2024-06-03 14:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:50:40.018824057 +0000 UTC m=+6.304688702" watchObservedRunningTime="2024-06-03 14:50:40.019649758 +0000 UTC m=+6.305514303"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.455710    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.455796    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:41.455777259 +0000 UTC m=+7.741641804 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556713    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556760    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: E0603 14:50:40.556889    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:41.556863952 +0000 UTC m=+7.842728597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 kubelet[1525]: I0603 14:50:40.845891    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae2b089ecf3ba840b08192449967b2406f6c6d0d8a56a114ddaabc35e3c7ee5"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.271560    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3698c141b11639f71ba16cbcb832e7c02097b07aaf307ba72c7cf41a64d9dde"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.438384    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a4ad712a66e8ac5a3ba6d988006318e7c0932c2ad0e4ce9838e7a98695f555"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.438646    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-720500" podUID="aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.465430    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.465640    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:43.465616988 +0000 UTC m=+9.751481633 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: I0603 14:50:41.502271    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-720500"
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566766    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566801    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.897752    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.566917    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:43.566874981 +0000 UTC m=+9.852739626 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.961788    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 kubelet[1525]: E0603 14:50:41.961975    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:42 multinode-720500 kubelet[1525]: I0603 14:50:42.520599    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-720500" podUID="aba2d079-d1a9-4a5c-9b9e-1b8a832d37ef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.487623    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.487724    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:47.487705549 +0000 UTC m=+13.773570194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588583    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588739    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.588832    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:47.588814442 +0000 UTC m=+13.874678987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.961044    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:43 multinode-720500 kubelet[1525]: E0603 14:50:43.961649    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:44 multinode-720500 kubelet[1525]: E0603 14:50:44.044586    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:45 multinode-720500 kubelet[1525]: E0603 14:50:45.961659    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:45 multinode-720500 kubelet[1525]: E0603 14:50:45.961954    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.521989    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.522196    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:50:55.522177172 +0000 UTC m=+21.808041717 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.622845    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.623053    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.623208    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:50:55.623162574 +0000 UTC m=+21.909027119 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.962070    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:47 multinode-720500 kubelet[1525]: E0603 14:50:47.962858    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.046385    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.959451    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:49 multinode-720500 kubelet[1525]: E0603 14:50:49.960279    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:51 multinode-720500 kubelet[1525]: E0603 14:50:51.960531    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:51 multinode-720500 kubelet[1525]: E0603 14:50:51.961799    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:52 multinode-720500 kubelet[1525]: I0603 14:50:52.534860    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-720500" podStartSLOduration=5.534842522 podStartE2EDuration="5.534842522s" podCreationTimestamp="2024-06-03 14:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-03 14:50:52.533300056 +0000 UTC m=+18.819164701" watchObservedRunningTime="2024-06-03 14:50:52.534842522 +0000 UTC m=+18.820707067"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:53 multinode-720500 kubelet[1525]: E0603 14:50:53.960555    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:53 multinode-720500 kubelet[1525]: E0603 14:50:53.961087    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:54 multinode-720500 kubelet[1525]: E0603 14:50:54.048175    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.898762    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.600709    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.600890    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:51:11.600870216 +0000 UTC m=+37.886734761 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701124    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701172    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.701306    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:51:11.701288915 +0000 UTC m=+37.987153560 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.959849    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:55 multinode-720500 kubelet[1525]: E0603 14:50:55.960175    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:57 multinode-720500 kubelet[1525]: E0603 14:50:57.960559    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:57 multinode-720500 kubelet[1525]: E0603 14:50:57.961245    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.050189    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.962718    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:50:59 multinode-720500 kubelet[1525]: E0603 14:50:59.963597    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:01 multinode-720500 kubelet[1525]: E0603 14:51:01.959962    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:01 multinode-720500 kubelet[1525]: E0603 14:51:01.961107    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:03 multinode-720500 kubelet[1525]: E0603 14:51:03.960485    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:03 multinode-720500 kubelet[1525]: E0603 14:51:03.961168    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:04 multinode-720500 kubelet[1525]: E0603 14:51:04.052718    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:05 multinode-720500 kubelet[1525]: E0603 14:51:05.960258    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:05 multinode-720500 kubelet[1525]: E0603 14:51:05.960918    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:07 multinode-720500 kubelet[1525]: E0603 14:51:07.960257    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:07 multinode-720500 kubelet[1525]: E0603 14:51:07.961704    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.054870    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.962422    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:09 multinode-720500 kubelet[1525]: E0603 14:51:09.963393    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.899769    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.663780    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.664114    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume podName:5d120704-a803-4278-aa7c-32304a6164a3 nodeName:}" failed. No retries permitted until 2024-06-03 14:51:43.66409273 +0000 UTC m=+69.949957275 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d120704-a803-4278-aa7c-32304a6164a3-config-volume") pod "coredns-7db6d8ff4d-c9wpc" (UID: "5d120704-a803-4278-aa7c-32304a6164a3") : object "kube-system"/"coredns" not registered
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.764900    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.764958    1525 projected.go:200] Error preparing data for projected volume kube-api-access-b5kjf for pod default/busybox-fc5497c4f-n2t5d: object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.765022    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf podName:5a2e152e-3390-4e7e-bcad-d3464a08ffef nodeName:}" failed. No retries permitted until 2024-06-03 14:51:43.765005046 +0000 UTC m=+70.050869691 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5kjf" (UniqueName: "kubernetes.io/projected/5a2e152e-3390-4e7e-bcad-d3464a08ffef-kube-api-access-b5kjf") pod "busybox-fc5497c4f-n2t5d" (UID: "5a2e152e-3390-4e7e-bcad-d3464a08ffef") : object "default"/"kube-root-ca.crt" not registered
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.962142    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 kubelet[1525]: E0603 14:51:11.962815    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: I0603 14:51:12.896193    1525 scope.go:117] "RemoveContainer" containerID="097ab9a9a33bbee7997d827b04c2900ded8d532f232d924bb9d84ecc302ec8b8"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: I0603 14:51:12.896857    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:12 multinode-720500 kubelet[1525]: E0603 14:51:12.897037    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8380cfdf-9758-4fd8-a511-db50974806a2)\"" pod="kube-system/storage-provisioner" podUID="8380cfdf-9758-4fd8-a511-db50974806a2"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:13 multinode-720500 kubelet[1525]: E0603 14:51:13.960835    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:13 multinode-720500 kubelet[1525]: E0603 14:51:13.961713    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:14 multinode-720500 kubelet[1525]: E0603 14:51:14.056993    1525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:15 multinode-720500 kubelet[1525]: E0603 14:51:15.959976    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:15 multinode-720500 kubelet[1525]: E0603 14:51:15.961758    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:17 multinode-720500 kubelet[1525]: E0603 14:51:17.963254    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-n2t5d" podUID="5a2e152e-3390-4e7e-bcad-d3464a08ffef"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:17 multinode-720500 kubelet[1525]: E0603 14:51:17.963475    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-c9wpc" podUID="5d120704-a803-4278-aa7c-32304a6164a3"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:25 multinode-720500 kubelet[1525]: I0603 14:51:25.959992    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]: E0603 14:51:33.993879    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.037024    1525 scope.go:117] "RemoveContainer" containerID="dcd798ff8a4661302e83f6f11f14422de529b0502fcd6143a4a29a3f45757a8a"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.091663    1525 scope.go:117] "RemoveContainer" containerID="5185046feae6a986658119ffc29d3a23423e83dba5ada983e73072c57ee6ad2d"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.627773    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891"
	I0603 14:51:55.900754    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.667520    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7"
	I0603 14:51:55.946760    9752 logs.go:123] Gathering logs for kube-proxy [42926c33070c] ...
	I0603 14:51:55.946760    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42926c33070c"
	I0603 14:51:55.978561    9752 command_runner.go:130] ! I0603 14:50:42.069219       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:51:55.978630    9752 command_runner.go:130] ! I0603 14:50:42.114052       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.154.20"]
	I0603 14:51:55.978808    9752 command_runner.go:130] ! I0603 14:50:42.256500       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:51:55.979038    9752 command_runner.go:130] ! I0603 14:50:42.256559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:51:55.979038    9752 command_runner.go:130] ! I0603 14:50:42.256598       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:51:55.979038    9752 command_runner.go:130] ! I0603 14:50:42.262735       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:51:55.979154    9752 command_runner.go:130] ! I0603 14:50:42.263687       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:51:55.979754    9752 command_runner.go:130] ! I0603 14:50:42.263771       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:55.980763    9752 command_runner.go:130] ! I0603 14:50:42.271889       1 config.go:192] "Starting service config controller"
	I0603 14:51:55.981553    9752 command_runner.go:130] ! I0603 14:50:42.273191       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:51:55.981628    9752 command_runner.go:130] ! I0603 14:50:42.273658       1 config.go:319] "Starting node config controller"
	I0603 14:51:55.981694    9752 command_runner.go:130] ! I0603 14:50:42.273675       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:51:55.981728    9752 command_runner.go:130] ! I0603 14:50:42.275244       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:51:55.981794    9752 command_runner.go:130] ! I0603 14:50:42.279063       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:51:55.981811    9752 command_runner.go:130] ! I0603 14:50:42.373930       1 shared_informer.go:320] Caches are synced for node config
	I0603 14:51:55.981811    9752 command_runner.go:130] ! I0603 14:50:42.373994       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:51:55.981811    9752 command_runner.go:130] ! I0603 14:50:42.379201       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:51:55.983901    9752 logs.go:123] Gathering logs for kube-proxy [3823f2e2bdb2] ...
	I0603 14:51:55.983901    9752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3823f2e2bdb2"
	I0603 14:51:56.009504    9752 command_runner.go:130] ! I0603 14:27:34.209759       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:51:56.009504    9752 command_runner.go:130] ! I0603 14:27:34.223354       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.150.195"]
	I0603 14:51:56.010051    9752 command_runner.go:130] ! I0603 14:27:34.293018       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:51:56.010051    9752 command_runner.go:130] ! I0603 14:27:34.293146       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:51:56.010051    9752 command_runner.go:130] ! I0603 14:27:34.293240       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:51:56.010051    9752 command_runner.go:130] ! I0603 14:27:34.299545       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:51:56.010154    9752 command_runner.go:130] ! I0603 14:27:34.300745       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:51:56.010208    9752 command_runner.go:130] ! I0603 14:27:34.300860       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:51:56.010231    9752 command_runner.go:130] ! I0603 14:27:34.304329       1 config.go:192] "Starting service config controller"
	I0603 14:51:56.010297    9752 command_runner.go:130] ! I0603 14:27:34.304371       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:51:56.010297    9752 command_runner.go:130] ! I0603 14:27:34.304437       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:51:56.010297    9752 command_runner.go:130] ! I0603 14:27:34.304447       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:51:56.010297    9752 command_runner.go:130] ! I0603 14:27:34.308322       1 config.go:319] "Starting node config controller"
	I0603 14:51:56.010391    9752 command_runner.go:130] ! I0603 14:27:34.308362       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:51:56.010391    9752 command_runner.go:130] ! I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:51:56.010391    9752 command_runner.go:130] ! I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:51:56.010391    9752 command_runner.go:130] ! I0603 14:27:34.409156       1 shared_informer.go:320] Caches are synced for node config
	I0603 14:51:56.012642    9752 logs.go:123] Gathering logs for Docker ...
	I0603 14:51:56.012642    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:05 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:05 minikube cri-dockerd[224]: time="2024-06-03T14:49:05Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:06 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube cri-dockerd[410]: time="2024-06-03T14:49:08Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:56.044565    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:08 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube cri-dockerd[430]: time="2024-06-03T14:49:10Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:10 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 systemd[1]: Starting Docker Application Container Engine...
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.547305957Z" level=info msg="Starting up"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.548486369Z" level=info msg="containerd not running, starting managed containerd"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[657]: time="2024-06-03T14:49:57.550163087Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=663
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.588439684Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615622567Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615812869Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615892669Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.615996071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.616816479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.616941980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617127782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617266784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617291284Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617304084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.617934891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.618718299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621568528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.045556    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621673229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.621927432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622026433Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622569239Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622740941Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.622759241Z" level=info msg="metadata content store policy set" policy=shared
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.634889967Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.634987368Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635019568Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 14:51:56.046554    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635037868Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635068969Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635139569Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635454873Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635562874Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635584474Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635599174Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635613674Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635627574Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635643175Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635663175Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635679475Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635693275Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635706375Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635718075Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635850277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635881177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635899277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635913377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.047561    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635929077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635942078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635954478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635967678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635981078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.635996378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636009278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636021378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636050579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636066579Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636087279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636101979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636113679Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636360182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636390182Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636405182Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 14:51:56.048558    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636417883Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 14:51:56.049559    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636428083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.049559    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636445483Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 14:51:56.049559    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636457683Z" level=info msg="NRI interface is disabled by configuration."
	I0603 14:51:56.049559    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.636895188Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 14:51:56.049559    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637062689Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 14:51:56.049559    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637110790Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:57 multinode-720500 dockerd[663]: time="2024-06-03T14:49:57.637130090Z" level=info msg="containerd successfully booted in 0.051012s"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:58 multinode-720500 dockerd[657]: time="2024-06-03T14:49:58.605269655Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:58 multinode-720500 dockerd[657]: time="2024-06-03T14:49:58.830205845Z" level=info msg="Loading containers: start."
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.290763156Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.371043862Z" level=info msg="Loading containers: done."
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.398495238Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.399429147Z" level=info msg="Daemon has completed initialization"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.454347399Z" level=info msg="API listen on [::]:2376"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 dockerd[657]: time="2024-06-03T14:49:59.454526701Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:49:59 multinode-720500 systemd[1]: Started Docker Application Container Engine.
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 systemd[1]: Stopping Docker Application Container Engine...
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.502444000Z" level=info msg="Processing signal 'terminated'"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.507803805Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508158405Z" level=info msg="Daemon shutdown complete"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508284905Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:25 multinode-720500 dockerd[657]: time="2024-06-03T14:50:25.508315705Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: docker.service: Deactivated successfully.
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: Stopped Docker Application Container Engine.
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 systemd[1]: Starting Docker Application Container Engine...
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.581999493Z" level=info msg="Starting up"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.582971494Z" level=info msg="containerd not running, starting managed containerd"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:26.586955297Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1060
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.619972528Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.642740749Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.642897349Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643057949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643079049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643105249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643117549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643236149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643414849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643436249Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643446349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.050568    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643469050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.643579550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646283452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646409552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646539152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646683652Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.646720152Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.647911754Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648009354Z" level=info msg="metadata content store policy set" policy=shared
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648261654Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648362554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648383154Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648399754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648413954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.648460954Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649437555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649582355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649628755Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649649855Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649667455Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649683955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649698955Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649721455Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649742255Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649758455Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649834555Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.649964955Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650022156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650042056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650059256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650077256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650091456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650109256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650125756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650143656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650161256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650181156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650384856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650434256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650459456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.051559    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650483856Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650511256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650529056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650544556Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650596756Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650696356Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650722156Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650741356Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650755156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650769156Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.650940656Z" level=info msg="NRI interface is disabled by configuration."
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652184258Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652391658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652570358Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:26.652616758Z" level=info msg="containerd successfully booted in 0.035610s"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.629822557Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.661126586Z" level=info msg="Loading containers: start."
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:27 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:27.933266636Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.024107020Z" level=info msg="Loading containers: done."
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.055971749Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.056192749Z" level=info msg="Daemon has completed initialization"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.104434794Z" level=info msg="API listen on /var/run/docker.sock"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 dockerd[1054]: time="2024-06-03T14:50:28.104654694Z" level=info msg="API listen on [::]:2376"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:28 multinode-720500 systemd[1]: Started Docker Application Container Engine.
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Start docker client with request timeout 0s"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Loaded network plugin cni"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:29Z" level=info msg="Start cri-dockerd grpc backend"
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:29 multinode-720500 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-c9wpc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"1ac710138e878688a914e49a9c19704bcae5ab056cf62c95cea7295c3ad0bc6a\""
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-n2t5d_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"e2a9c5dc3b1b023c47092aa3275bb5237a5b24f6a82046a53a57ad3155f0f8d0\""
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.786808143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.786968543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.787857244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.788128044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.878884027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882292830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882532331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.052564    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.882658231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.053891    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.964961706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.053891    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965059107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.053891    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965073207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.053891    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:34.965170307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:34 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0461b752e72814194a3ff0778ad4897f646990c90f8c3fcfb9c28be750bfab15/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.004294343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.006505445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.006802445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.007209145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/29feb700b8ebf36a5e533c2d019afb67137df3c39cd996736aba2eea6197e1b3/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e60bc15f541ebe44a8b2d1cc1a4a878d35fac3b2b8b23ad5b59ae6a7c18fa90/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/192b150e443d2d545d193223f6cdc02bc60fa88f9e646c72e84cad439aec3645/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330597043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330771943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330809243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.330940843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.411710918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412168918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412399218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.412596918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.543921039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544077939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544114939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.544224939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547915343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547962443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.547974143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:35 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:35.548055043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:39 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596002188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596253788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.054559    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596401388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.596628788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633733423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633807223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633821423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.633921623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665408852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665567252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665590052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:40.665814152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ae2b089ecf3ba840b08192449967b2406f6c6d0d8a56a114ddaabc35e3c7ee5/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:40 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b4a4ad712a66e8ac5a3ba6d988006318e7c0932c2ad0e4ce9838e7a98695f555/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.147693095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.147891096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.148071396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.148525196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236102677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236209377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236229077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.236423777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:50:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a3698c141b11639f71ba16cbcb832e7c02097b07aaf307ba72c7cf41a64d9dde/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.541976658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.542524859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.542803559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:50:41 multinode-720500 dockerd[1060]: time="2024-06-03T14:50:41.545377661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1054]: time="2024-06-03T14:51:11.898791571Z" level=info msg="ignoring event" container=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.899973164Z" level=info msg="shim disconnected" id=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 namespace=moby
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.900143563Z" level=warning msg="cleaning up after shim disconnected" id=2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566 namespace=moby
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:51:11 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:11.900158663Z" level=info msg="cleaning up dead shim" namespace=moby
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147466127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.055542    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147614527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.147634527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:26 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:26.148526626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.314851642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.315085942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.315407842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.320950643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354750647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354889547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.354906247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.355401447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 cri-dockerd[1279]: time="2024-06-03T14:51:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7/resolv.conf as [nameserver 172.22.144.1]"
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894225423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894606924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894797424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.894956925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.942044061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.942892263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.943014363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:44 multinode-720500 dockerd[1060]: time="2024-06-03T14:51:44.943428065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:47 multinode-720500 dockerd[1054]: 2024/06/03 14:51:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.056560    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:56.057555    9752 command_runner.go:130] > Jun 03 14:51:56 multinode-720500 dockerd[1054]: 2024/06/03 14:51:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0603 14:51:58.597456    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods
	I0603 14:51:58.597456    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:58.597456    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:58.597456    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:58.602881    9752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 14:51:58.603886    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:58.603886    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:58 GMT
	I0603 14:51:58.603886    9752 round_trippers.go:580]     Audit-Id: 2adbffed-296b-4ad2-802f-cba40c2a9b63
	I0603 14:51:58.603886    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:58.603886    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:58.603886    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:58.603886    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:58.604193    9752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1997"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1984","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86572 chars]
	I0603 14:51:58.608947    9752 system_pods.go:59] 12 kube-system pods found
	I0603 14:51:58.608947    9752 system_pods.go:61] "coredns-7db6d8ff4d-c9wpc" [5d120704-a803-4278-aa7c-32304a6164a3] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "etcd-multinode-720500" [1a2533a2-16e9-4696-9694-186579c52b55] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kindnet-26s27" [08ea7c30-4962-4026-8eb0-6864835e97e6] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kindnet-fmfz2" [78515e23-16d2-4a8e-9845-375aa17ab80b] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kindnet-h58hc" [43c48b16-ca18-4ce1-9a34-be58cc0c981b] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kube-apiserver-multinode-720500" [b27b9256-3c5b-4432-8a9e-ebe5303b88f0] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kube-controller-manager-multinode-720500" [6ba9c1e5-75bb-4731-9105-49acbbf3f237] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kube-proxy-64l9x" [ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kube-proxy-ctm5l" [38069b1b-8ba9-46af-b4e7-7add5d9c67fc] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kube-proxy-sm9rr" [4f0321c0-f47d-463e-bda2-919f37735748] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "kube-scheduler-multinode-720500" [9d420d28-dde0-4504-a4d4-f840cab56ebe] Running
	I0603 14:51:58.608947    9752 system_pods.go:61] "storage-provisioner" [8380cfdf-9758-4fd8-a511-db50974806a2] Running
	I0603 14:51:58.608947    9752 system_pods.go:74] duration metric: took 3.7042213s to wait for pod list to return data ...
	I0603 14:51:58.608947    9752 default_sa.go:34] waiting for default service account to be created ...
	I0603 14:51:58.608947    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/default/serviceaccounts
	I0603 14:51:58.609930    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:58.609967    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:58.609967    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:58.612887    9752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 14:51:58.612887    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:58.612887    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:58.612887    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:58.612887    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:58.612887    9752 round_trippers.go:580]     Content-Length: 262
	I0603 14:51:58.612887    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:58 GMT
	I0603 14:51:58.612887    9752 round_trippers.go:580]     Audit-Id: 393e5682-f954-4ea9-b887-c1f2e4a42b19
	I0603 14:51:58.612887    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:58.612887    9752 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1997"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"fbd8badf-59ec-4931-b3bf-13e96cb86c7b","resourceVersion":"352","creationTimestamp":"2024-06-03T14:27:32Z"}}]}
	I0603 14:51:58.613347    9752 default_sa.go:45] found service account: "default"
	I0603 14:51:58.613347    9752 default_sa.go:55] duration metric: took 4.4004ms for default service account to be created ...
	I0603 14:51:58.613347    9752 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 14:51:58.613347    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/namespaces/kube-system/pods
	I0603 14:51:58.613347    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:58.613347    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:58.613347    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:58.617942    9752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 14:51:58.617942    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:58.617942    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:58.617942    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:58.617942    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:58.617942    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:58.617942    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:58 GMT
	I0603 14:51:58.617942    9752 round_trippers.go:580]     Audit-Id: d88e58c6-926c-4a33-a21c-d625a32ba7cc
	I0603 14:51:58.619012    9752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1997"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-c9wpc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"5d120704-a803-4278-aa7c-32304a6164a3","resourceVersion":"1984","creationTimestamp":"2024-06-03T14:27:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-03T14:27:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e7b4682-2ba5-4392-bfcb-7bb728e8e9be\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86572 chars]
	I0603 14:51:58.622343    9752 system_pods.go:86] 12 kube-system pods found
	I0603 14:51:58.622343    9752 system_pods.go:89] "coredns-7db6d8ff4d-c9wpc" [5d120704-a803-4278-aa7c-32304a6164a3] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "etcd-multinode-720500" [1a2533a2-16e9-4696-9694-186579c52b55] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kindnet-26s27" [08ea7c30-4962-4026-8eb0-6864835e97e6] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kindnet-fmfz2" [78515e23-16d2-4a8e-9845-375aa17ab80b] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kindnet-h58hc" [43c48b16-ca18-4ce1-9a34-be58cc0c981b] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kube-apiserver-multinode-720500" [b27b9256-3c5b-4432-8a9e-ebe5303b88f0] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kube-controller-manager-multinode-720500" [6ba9c1e5-75bb-4731-9105-49acbbf3f237] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kube-proxy-64l9x" [ef28f2ab-ff97-468f-8b61-a9a0e1a1a03a] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kube-proxy-ctm5l" [38069b1b-8ba9-46af-b4e7-7add5d9c67fc] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kube-proxy-sm9rr" [4f0321c0-f47d-463e-bda2-919f37735748] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "kube-scheduler-multinode-720500" [9d420d28-dde0-4504-a4d4-f840cab56ebe] Running
	I0603 14:51:58.622343    9752 system_pods.go:89] "storage-provisioner" [8380cfdf-9758-4fd8-a511-db50974806a2] Running
	I0603 14:51:58.622343    9752 system_pods.go:126] duration metric: took 8.9956ms to wait for k8s-apps to be running ...
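For context on the "system_pods" lines above: the check they record is essentially a pod list against the kube-system namespace followed by a per-pod phase test. Below is a minimal, illustrative client-go sketch of that kind of check; it is not code from minikube itself, and the kubeconfig path is an assumption for the sketch.

// Illustrative only: list kube-system pods and report whether each is Running,
// roughly the readiness check the "system_pods" log lines above record.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins\.kube\config`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Gate on the Running phase, as the wait loop in the log does.
		running := p.Status.Phase == corev1.PodRunning
		fmt.Printf("%q running=%v\n", p.Name, running)
	}
}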
	I0603 14:51:58.622343    9752 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 14:51:58.635441    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 14:51:58.660474    9752 system_svc.go:56] duration metric: took 38.1304ms WaitForService to wait for kubelet
	I0603 14:51:58.660474    9752 kubeadm.go:576] duration metric: took 1m14.8709263s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 14:51:58.660474    9752 node_conditions.go:102] verifying NodePressure condition ...
	I0603 14:51:58.660650    9752 round_trippers.go:463] GET https://172.22.154.20:8443/api/v1/nodes
	I0603 14:51:58.660709    9752 round_trippers.go:469] Request Headers:
	I0603 14:51:58.660709    9752 round_trippers.go:473]     Accept: application/json, */*
	I0603 14:51:58.660709    9752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0603 14:51:58.664465    9752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 14:51:58.664465    9752 round_trippers.go:577] Response Headers:
	I0603 14:51:58.664465    9752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 63258ec6-5939-4b2c-90b7-1dac9a067f10
	I0603 14:51:58.664465    9752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0e670498-f123-4985-b14e-d20886b627c6
	I0603 14:51:58.664465    9752 round_trippers.go:580]     Date: Mon, 03 Jun 2024 14:51:58 GMT
	I0603 14:51:58.664465    9752 round_trippers.go:580]     Audit-Id: 9b0c43f2-4a5a-4b3f-bdf5-ddc7fe069877
	I0603 14:51:58.664465    9752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0603 14:51:58.664702    9752 round_trippers.go:580]     Content-Type: application/json
	I0603 14:51:58.664799    9752 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1997"},"items":[{"metadata":{"name":"multinode-720500","uid":"91acd4fc-6bce-4e6c-a02f-906aa279b7b0","resourceVersion":"1958","creationTimestamp":"2024-06-03T14:27:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-720500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3e16338a2e51863cb2fad83b163378f045b3a354","minikube.k8s.io/name":"multinode-720500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_03T14_27_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16259 chars]
	I0603 14:51:58.666485    9752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:51:58.666485    9752 node_conditions.go:123] node cpu capacity is 2
	I0603 14:51:58.666599    9752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:51:58.666599    9752 node_conditions.go:123] node cpu capacity is 2
	I0603 14:51:58.666599    9752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 14:51:58.666599    9752 node_conditions.go:123] node cpu capacity is 2
	I0603 14:51:58.666599    9752 node_conditions.go:105] duration metric: took 6.1255ms to run NodePressure ...
	I0603 14:51:58.666599    9752 start.go:240] waiting for startup goroutines ...
	I0603 14:51:58.666703    9752 start.go:245] waiting for cluster config update ...
	I0603 14:51:58.666703    9752 start.go:254] writing updated cluster config ...
	I0603 14:51:58.671202    9752 out.go:177] 
	I0603 14:51:58.678379    9752 config.go:182] Loaded profile config "ha-149700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:51:58.687586    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:51:58.687586    9752 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:51:58.696211    9752 out.go:177] * Starting "multinode-720500-m02" worker node in "multinode-720500" cluster
	I0603 14:51:58.699076    9752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 14:51:58.699076    9752 cache.go:56] Caching tarball of preloaded images
	I0603 14:51:58.699076    9752 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0603 14:51:58.699076    9752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0603 14:51:58.699076    9752 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:51:58.701385    9752 start.go:360] acquireMachinesLock for multinode-720500-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 14:51:58.701385    9752 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-720500-m02"
	I0603 14:51:58.701385    9752 start.go:96] Skipping create...Using existing machine configuration
	I0603 14:51:58.701385    9752 fix.go:54] fixHost starting: m02
	I0603 14:51:58.702494    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:00.924858    9752 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 14:52:00.925086    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:00.925086    9752 fix.go:112] recreateIfNeeded on multinode-720500-m02: state=Stopped err=<nil>
	W0603 14:52:00.925086    9752 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 14:52:00.928550    9752 out.go:177] * Restarting existing hyperv VM for "multinode-720500-m02" ...
	I0603 14:52:00.936694    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-720500-m02
	I0603 14:52:04.044966    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:52:04.044966    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:04.047795    9752 main.go:141] libmachine: Waiting for host to start...
	I0603 14:52:04.048214    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:06.378075    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:06.378075    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:06.378075    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:08.970262    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:52:08.970473    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:09.984158    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:12.243565    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:12.243565    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:12.243730    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:14.815718    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:52:14.815750    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:15.830056    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:18.058768    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:18.059688    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:18.059746    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:20.661241    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:52:20.662221    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:21.665405    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:23.930478    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:23.930537    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:23.930758    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:26.539332    9752 main.go:141] libmachine: [stdout =====>] : 
	I0603 14:52:26.539332    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:27.553638    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:29.836618    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:29.837446    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:29.837446    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:32.494172    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:52:32.494212    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:32.496860    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:34.681231    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:34.681231    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:34.681644    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:37.292848    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:52:37.292848    9752 main.go:141] libmachine: [stderr =====>] : 
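The libmachine lines above poll the Hyper-V VM by shelling out to PowerShell, first for its state and then for the first IP address of its first network adapter, retrying until an address appears. The sketch below illustrates that polling pattern; it is not the minikube implementation, and the VM name and one-second retry interval are assumptions.

// Illustrative only: poll a Hyper-V VM's state and first IP address via PowerShell.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOutput runs one PowerShell expression and returns its trimmed stdout.
func psOutput(script string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "multinode-720500-m02" // assumed VM name

	for {
		state, err := psOutput(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			panic(err)
		}
		ip, _ := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
		fmt.Printf("state=%s ip=%q\n", state, ip)
		if state == "Running" && ip != "" {
			break // host is up and has an address
		}
		time.Sleep(time.Second) // wait before re-polling, as the log's timestamps suggest
	}
}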
	I0603 14:52:37.293061    9752 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500\config.json ...
	I0603 14:52:37.296274    9752 machine.go:94] provisionDockerMachine start ...
	I0603 14:52:37.296274    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:39.478747    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:39.478747    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:39.478883    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:42.059546    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:52:42.059546    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:42.065913    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:52:42.065979    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:52:42.065979    9752 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 14:52:42.190260    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 14:52:42.190260    9752 buildroot.go:166] provisioning hostname "multinode-720500-m02"
	I0603 14:52:42.190406    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:44.334883    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:44.335874    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:44.335874    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:46.891967    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:52:46.891967    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:46.898239    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:52:46.899050    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:52:46.899050    9752 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-720500-m02 && echo "multinode-720500-m02" | sudo tee /etc/hostname
	I0603 14:52:47.052489    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-720500-m02
	
	I0603 14:52:47.052489    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:49.203382    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:49.203382    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:49.203382    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:51.772847    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:52:51.773530    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:51.779804    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:52:51.780383    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:52:51.780383    9752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-720500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-720500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-720500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 14:52:51.921583    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
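The "About to run SSH command" entries above correspond to provisioning commands (hostname and /etc/hosts rewrites) executed over SSH against the freshly started VM. A minimal sketch of running such a command with golang.org/x/crypto/ssh follows; the key path, user, and address are taken from or assumed to match the log, and this is not the minikube code itself.

// Illustrative only: run one provisioning command over SSH, mirroring the log above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed private-key path for this sketch.
	key, err := os.ReadFile(`C:\Users\jenkins\.minikube\machines\multinode-720500-m02\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "172.22.149.253:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(`sudo hostname multinode-720500-m02 && echo "multinode-720500-m02" | sudo tee /etc/hostname`)
	fmt.Printf("err=%v output=%s\n", err, out)
}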
	I0603 14:52:51.921583    9752 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0603 14:52:51.921583    9752 buildroot.go:174] setting up certificates
	I0603 14:52:51.921583    9752 provision.go:84] configureAuth start
	I0603 14:52:51.921583    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:54.067716    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:54.067716    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:54.067716    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:52:56.615390    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:52:56.615390    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:56.616263    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:52:58.751125    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:52:58.751125    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:52:58.751996    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:01.338340    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:01.338340    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:01.338340    9752 provision.go:143] copyHostCerts
	I0603 14:53:01.339387    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0603 14:53:01.339943    9752 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0603 14:53:01.339943    9752 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0603 14:53:01.340326    9752 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0603 14:53:01.341593    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0603 14:53:01.341799    9752 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0603 14:53:01.341913    9752 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0603 14:53:01.342344    9752 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0603 14:53:01.343448    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0603 14:53:01.343724    9752 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0603 14:53:01.343867    9752 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0603 14:53:01.344149    9752 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0603 14:53:01.345161    9752 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-720500-m02 san=[127.0.0.1 172.22.149.253 localhost minikube multinode-720500-m02]
	I0603 14:53:01.434282    9752 provision.go:177] copyRemoteCerts
	I0603 14:53:01.449343    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 14:53:01.449343    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:03.627583    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:03.627583    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:03.628011    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:06.208381    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:06.208381    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:06.208381    9752 sshutil.go:53] new ssh client: &{IP:172.22.149.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:53:06.306002    9752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8566202s)
	I0603 14:53:06.306002    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0603 14:53:06.306002    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 14:53:06.354488    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0603 14:53:06.354898    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0603 14:53:06.405399    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0603 14:53:06.405399    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 14:53:06.461413    9752 provision.go:87] duration metric: took 14.5397128s to configureAuth
	I0603 14:53:06.461413    9752 buildroot.go:189] setting minikube options for container-runtime
	I0603 14:53:06.462466    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:53:06.462634    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:08.674379    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:08.675292    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:08.675292    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:11.235594    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:11.235594    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:11.241616    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:53:11.241742    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:53:11.241742    9752 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0603 14:53:11.364326    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0603 14:53:11.364403    9752 buildroot.go:70] root file system type: tmpfs
	I0603 14:53:11.364662    9752 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0603 14:53:11.364773    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:13.498365    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:13.498464    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:13.498464    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:16.042981    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:16.042981    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:16.049551    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:53:16.050096    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:53:16.050096    9752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.22.154.20"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0603 14:53:16.201264    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.22.154.20
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0603 14:53:16.201264    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:18.376783    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:18.378078    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:18.378153    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:20.959655    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:20.960474    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:20.966200    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:53:20.966736    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:53:20.966736    9752 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0603 14:53:23.264026    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0603 14:53:23.264026    9752 machine.go:97] duration metric: took 45.9673791s to provisionDockerMachine
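The docker.service update just above follows a write-then-compare pattern: the generated unit goes to docker.service.new, `diff -u` against the existing unit decides whether anything changed, and only a real difference triggers the mv plus daemon-reload/enable/restart (here the diff fails because no unit existed yet, so the new file is installed). A minimal local-filesystem sketch of the same idea, assuming the caller handles the reload; minikube itself performs these steps over SSH with diff/mv/systemctl.

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // updateIfChanged writes content to path only when it differs from what is
    // already there, and reports whether the caller needs a daemon-reload/restart.
    func updateIfChanged(path string, content []byte) (changed bool, err error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, content) {
            return false, nil // identical unit, nothing to do
        }
        if err != nil && !os.IsNotExist(err) {
            return false, err
        }
        return true, os.WriteFile(path, content, 0o644)
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        changed, err := updateIfChanged("/tmp/docker.service", unit)
        fmt.Println(changed, err)
    }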
	I0603 14:53:23.264026    9752 start.go:293] postStartSetup for "multinode-720500-m02" (driver="hyperv")
	I0603 14:53:23.264026    9752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 14:53:23.276578    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 14:53:23.276578    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:25.434580    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:25.435367    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:25.435367    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:28.003954    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:28.003954    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:28.005193    9752 sshutil.go:53] new ssh client: &{IP:172.22.149.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:53:28.118091    9752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8414732s)
	I0603 14:53:28.130081    9752 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 14:53:28.139093    9752 command_runner.go:130] > NAME=Buildroot
	I0603 14:53:28.139280    9752 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 14:53:28.139280    9752 command_runner.go:130] > ID=buildroot
	I0603 14:53:28.139280    9752 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 14:53:28.139280    9752 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 14:53:28.139280    9752 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 14:53:28.139280    9752 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0603 14:53:28.139969    9752 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0603 14:53:28.140696    9752 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> 105442.pem in /etc/ssl/certs
	I0603 14:53:28.140696    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /etc/ssl/certs/105442.pem
	I0603 14:53:28.155361    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 14:53:28.176021    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /etc/ssl/certs/105442.pem (1708 bytes)
	I0603 14:53:28.220503    9752 start.go:296] duration metric: took 4.9564372s for postStartSetup
	I0603 14:53:28.220724    9752 fix.go:56] duration metric: took 1m29.5186123s for fixHost
	I0603 14:53:28.220856    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:30.376443    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:30.376443    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:30.376443    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:32.957532    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:32.957981    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:32.963744    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:53:32.964505    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:53:32.964505    9752 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 14:53:33.093612    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717426413.100380487
	
	I0603 14:53:33.093732    9752 fix.go:216] guest clock: 1717426413.100380487
	I0603 14:53:33.093732    9752 fix.go:229] Guest: 2024-06-03 14:53:33.100380487 +0000 UTC Remote: 2024-06-03 14:53:28.2207248 +0000 UTC m=+299.350066901 (delta=4.879655687s)
	I0603 14:53:33.093850    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:35.201749    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:35.202147    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:35.202147    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:37.790908    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:37.790908    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:37.797165    9752 main.go:141] libmachine: Using SSH client type: native
	I0603 14:53:37.797165    9752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71a4a0] 0x71d080 <nil>  [] 0s} 172.22.149.253 22 <nil> <nil>}
	I0603 14:53:37.797776    9752 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717426413
	I0603 14:53:37.931180    9752 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jun  3 14:53:33 UTC 2024
	
	I0603 14:53:37.931304    9752 fix.go:236] clock set: Mon Jun  3 14:53:33 UTC 2024
	 (err=<nil>)
	I0603 14:53:37.931304    9752 start.go:83] releasing machines lock for "multinode-720500-m02", held for 1m39.2291131s
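The clock fix above reads the guest's `date +%s.%N`, compares it against the host-side timestamp (a drift of about 4.88s here), and then forces the guest clock with `sudo date -s @<epoch>`. A small Go sketch of that decision follows; the 2-second tolerance and the choice of reference epoch are assumptions for illustration, not necessarily minikube's exact logic.

    package main

    import (
        "fmt"
        "time"
    )

    // maybeSyncClock reports whether the guest clock drifted past the tolerance
    // and, if so, returns the command that would force it to the reference epoch.
    func maybeSyncClock(guest, reference time.Time, tolerance time.Duration) (string, bool) {
        delta := guest.Sub(reference)
        if delta < 0 {
            delta = -delta
        }
        if delta <= tolerance {
            return "", false // close enough, leave the guest clock alone
        }
        return fmt.Sprintf("sudo date -s @%d", reference.Unix()), true
    }

    func main() {
        guest := time.Unix(1717426413, 100380487)             // guest clock from the log
        reference := guest.Add(-4879655687 * time.Nanosecond) // host-side time, delta 4.879655687s
        if cmd, drifted := maybeSyncClock(guest, reference, 2*time.Second); drifted {
            fmt.Println(cmd)
        }
    }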
	I0603 14:53:37.931427    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:40.065225    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:40.065758    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:40.065758    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:42.574215    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:42.575221    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:42.580055    9752 out.go:177] * Found network options:
	I0603 14:53:42.583237    9752 out.go:177]   - NO_PROXY=172.22.154.20
	W0603 14:53:42.584517    9752 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 14:53:42.587020    9752 out.go:177]   - NO_PROXY=172.22.154.20
	W0603 14:53:42.589996    9752 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 14:53:42.591046    9752 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 14:53:42.593813    9752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 14:53:42.593813    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:42.603476    9752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 14:53:42.603476    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:53:44.803724    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:44.803818    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:44.803818    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:44.848516    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:44.848516    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:44.848642    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:53:47.515409    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:47.515409    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:47.515409    9752 sshutil.go:53] new ssh client: &{IP:172.22.149.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:53:47.538501    9752 main.go:141] libmachine: [stdout =====>] : 172.22.149.253
	
	I0603 14:53:47.538501    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:47.539516    9752 sshutil.go:53] new ssh client: &{IP:172.22.149.253 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:53:47.704412    9752 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 14:53:47.704533    9752 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1106787s)
	I0603 14:53:47.704592    9752 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0603 14:53:47.704592    9752 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1010745s)
	W0603 14:53:47.704592    9752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 14:53:47.715437    9752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 14:53:47.746454    9752 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0603 14:53:47.747215    9752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 14:53:47.747215    9752 start.go:494] detecting cgroup driver to use...
	I0603 14:53:47.747461    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:53:47.784875    9752 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0603 14:53:47.798913    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0603 14:53:47.828886    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0603 14:53:47.847234    9752 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0603 14:53:47.860461    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0603 14:53:47.891558    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 14:53:47.923422    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0603 14:53:47.954071    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0603 14:53:47.989321    9752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 14:53:48.025299    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0603 14:53:48.058121    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0603 14:53:48.092417    9752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0603 14:53:48.127212    9752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 14:53:48.145707    9752 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 14:53:48.158930    9752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 14:53:48.193873    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:53:48.393293    9752 ssh_runner.go:195] Run: sudo systemctl restart containerd
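The sed calls above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver; the key edit is flipping SystemdCgroup to false while preserving the line's indentation. The same substitution expressed in Go, as a sketch that operates on an in-memory string rather than the guest's file:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCgroupfs flips SystemdCgroup to false in a containerd config.toml,
    // mirroring the `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
    // call run on the guest above.
    func setCgroupfs(configTOML string) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }

    func main() {
        sample := "  [plugins.\"io.containerd.runtime.v1.linux\"]\n    SystemdCgroup = true\n"
        fmt.Print(setCgroupfs(sample))
    }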
	I0603 14:53:48.427243    9752 start.go:494] detecting cgroup driver to use...
	I0603 14:53:48.440210    9752 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0603 14:53:48.463459    9752 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0603 14:53:48.463459    9752 command_runner.go:130] > [Unit]
	I0603 14:53:48.463459    9752 command_runner.go:130] > Description=Docker Application Container Engine
	I0603 14:53:48.463459    9752 command_runner.go:130] > Documentation=https://docs.docker.com
	I0603 14:53:48.463459    9752 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0603 14:53:48.463459    9752 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0603 14:53:48.463459    9752 command_runner.go:130] > StartLimitBurst=3
	I0603 14:53:48.463459    9752 command_runner.go:130] > StartLimitIntervalSec=60
	I0603 14:53:48.463459    9752 command_runner.go:130] > [Service]
	I0603 14:53:48.463459    9752 command_runner.go:130] > Type=notify
	I0603 14:53:48.463459    9752 command_runner.go:130] > Restart=on-failure
	I0603 14:53:48.463459    9752 command_runner.go:130] > Environment=NO_PROXY=172.22.154.20
	I0603 14:53:48.463459    9752 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0603 14:53:48.463459    9752 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0603 14:53:48.463459    9752 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0603 14:53:48.463459    9752 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0603 14:53:48.463459    9752 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0603 14:53:48.464025    9752 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0603 14:53:48.464025    9752 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0603 14:53:48.464025    9752 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0603 14:53:48.464082    9752 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0603 14:53:48.464116    9752 command_runner.go:130] > ExecStart=
	I0603 14:53:48.464116    9752 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0603 14:53:48.464154    9752 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0603 14:53:48.464195    9752 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0603 14:53:48.464229    9752 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0603 14:53:48.464229    9752 command_runner.go:130] > LimitNOFILE=infinity
	I0603 14:53:48.464314    9752 command_runner.go:130] > LimitNPROC=infinity
	I0603 14:53:48.464314    9752 command_runner.go:130] > LimitCORE=infinity
	I0603 14:53:48.464337    9752 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0603 14:53:48.464337    9752 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0603 14:53:48.464337    9752 command_runner.go:130] > TasksMax=infinity
	I0603 14:53:48.464395    9752 command_runner.go:130] > TimeoutStartSec=0
	I0603 14:53:48.464395    9752 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0603 14:53:48.464428    9752 command_runner.go:130] > Delegate=yes
	I0603 14:53:48.464458    9752 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0603 14:53:48.464458    9752 command_runner.go:130] > KillMode=process
	I0603 14:53:48.464458    9752 command_runner.go:130] > [Install]
	I0603 14:53:48.464458    9752 command_runner.go:130] > WantedBy=multi-user.target
	I0603 14:53:48.478554    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:53:48.514172    9752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 14:53:48.565797    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:53:48.602508    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 14:53:48.642096    9752 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0603 14:53:48.697682    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0603 14:53:48.722494    9752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:53:48.756161    9752 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0603 14:53:48.774650    9752 ssh_runner.go:195] Run: which cri-dockerd
	I0603 14:53:48.780598    9752 command_runner.go:130] > /usr/bin/cri-dockerd
	I0603 14:53:48.791952    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0603 14:53:48.809113    9752 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0603 14:53:48.853247    9752 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0603 14:53:49.053457    9752 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0603 14:53:49.246160    9752 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0603 14:53:49.246321    9752 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0603 14:53:49.290669    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:53:49.487975    9752 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0603 14:53:52.111216    9752 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6231579s)
	I0603 14:53:52.122712    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0603 14:53:52.160406    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 14:53:52.199360    9752 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0603 14:53:52.417094    9752 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0603 14:53:52.621731    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:53:52.841269    9752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0603 14:53:52.883968    9752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0603 14:53:52.920189    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:53:53.134247    9752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0603 14:53:53.244024    9752 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0603 14:53:53.256425    9752 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0603 14:53:53.265046    9752 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0603 14:53:53.265046    9752 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 14:53:53.265046    9752 command_runner.go:130] > Device: 0,22	Inode: 861         Links: 1
	I0603 14:53:53.265046    9752 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0603 14:53:53.265046    9752 command_runner.go:130] > Access: 2024-06-03 14:53:53.168530342 +0000
	I0603 14:53:53.265046    9752 command_runner.go:130] > Modify: 2024-06-03 14:53:53.168530342 +0000
	I0603 14:53:53.265046    9752 command_runner.go:130] > Change: 2024-06-03 14:53:53.172530347 +0000
	I0603 14:53:53.265046    9752 command_runner.go:130] >  Birth: -
	I0603 14:53:53.265046    9752 start.go:562] Will wait 60s for crictl version
	I0603 14:53:53.277615    9752 ssh_runner.go:195] Run: which crictl
	I0603 14:53:53.283882    9752 command_runner.go:130] > /usr/bin/crictl
	I0603 14:53:53.296082    9752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 14:53:53.348127    9752 command_runner.go:130] > Version:  0.1.0
	I0603 14:53:53.349006    9752 command_runner.go:130] > RuntimeName:  docker
	I0603 14:53:53.349006    9752 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0603 14:53:53.349006    9752 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 14:53:53.349006    9752 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0603 14:53:53.359180    9752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 14:53:53.390758    9752 command_runner.go:130] > 26.0.2
	I0603 14:53:53.401578    9752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0603 14:53:53.430671    9752 command_runner.go:130] > 26.0.2
	I0603 14:53:53.435918    9752 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0603 14:53:53.438746    9752 out.go:177]   - env NO_PROXY=172.22.154.20
	I0603 14:53:53.443234    9752 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0603 14:53:53.447613    9752 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0603 14:53:53.447613    9752 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0603 14:53:53.447613    9752 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0603 14:53:53.447613    9752 ip.go:207] Found interface: {Index:18 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:ab:ea:47 Flags:up|broadcast|multicast|running}
	I0603 14:53:53.450614    9752 ip.go:210] interface addr: fe80::7e99:5c72:564a:df0/64
	I0603 14:53:53.450614    9752 ip.go:210] interface addr: 172.22.144.1/20
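The ip.go lines above scan the host's NICs for one whose name matches the "vEthernet (Default Switch)" prefix and take its IPv4 address (172.22.144.1/20) for the host.minikube.internal entry written next. A self-contained sketch of that lookup with the standard net package; findHostIP is a made-up helper name, not minikube's.

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // findHostIP returns the first IPv4 address on an interface whose name
    // starts with the given prefix, skipping non-matching adapters the same
    // way the log above skips "Ethernet 2" and the loopback interface.
    func findHostIP(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, ifc := range ifaces {
            if !strings.HasPrefix(ifc.Name, prefix) {
                continue
            }
            addrs, err := ifc.Addrs()
            if err != nil {
                return nil, err
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    return ipnet.IP, nil
                }
            }
        }
        return nil, fmt.Errorf("no interface with prefix %q", prefix)
    }

    func main() {
        ip, err := findHostIP("vEthernet (Default Switch)")
        fmt.Println(ip, err)
    }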
	I0603 14:53:53.464382    9752 ssh_runner.go:195] Run: grep 172.22.144.1	host.minikube.internal$ /etc/hosts
	I0603 14:53:53.470729    9752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.22.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 14:53:53.492517    9752 mustload.go:65] Loading cluster: multinode-720500
	I0603 14:53:53.493207    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:53:53.493740    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:53:55.642893    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:55.642893    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:55.643442    9752 host.go:66] Checking if "multinode-720500" exists ...
	I0603 14:53:55.644221    9752 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-720500 for IP: 172.22.149.253
	I0603 14:53:55.644265    9752 certs.go:194] generating shared ca certs ...
	I0603 14:53:55.644298    9752 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:53:55.645064    9752 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0603 14:53:55.645182    9752 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0603 14:53:55.645744    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 14:53:55.646053    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0603 14:53:55.646053    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 14:53:55.646053    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 14:53:55.646664    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem (1338 bytes)
	W0603 14:53:55.646664    9752 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544_empty.pem, impossibly tiny 0 bytes
	I0603 14:53:55.647253    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0603 14:53:55.647253    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0603 14:53:55.647253    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0603 14:53:55.647866    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0603 14:53:55.648584    9752 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem (1708 bytes)
	I0603 14:53:55.648764    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem -> /usr/share/ca-certificates/10544.pem
	I0603 14:53:55.648764    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem -> /usr/share/ca-certificates/105442.pem
	I0603 14:53:55.648764    9752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:53:55.649285    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 14:53:55.702334    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 14:53:55.752104    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 14:53:55.798483    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0603 14:53:55.845865    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\10544.pem --> /usr/share/ca-certificates/10544.pem (1338 bytes)
	I0603 14:53:55.890471    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\105442.pem --> /usr/share/ca-certificates/105442.pem (1708 bytes)
	I0603 14:53:55.933517    9752 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 14:53:55.991142    9752 ssh_runner.go:195] Run: openssl version
	I0603 14:53:56.000144    9752 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 14:53:56.012460    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10544.pem && ln -fs /usr/share/ca-certificates/10544.pem /etc/ssl/certs/10544.pem"
	I0603 14:53:56.043637    9752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10544.pem
	I0603 14:53:56.053075    9752 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 14:53:56.053075    9752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:41 /usr/share/ca-certificates/10544.pem
	I0603 14:53:56.066794    9752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10544.pem
	I0603 14:53:56.075622    9752 command_runner.go:130] > 51391683
	I0603 14:53:56.088499    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10544.pem /etc/ssl/certs/51391683.0"
	I0603 14:53:56.120186    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105442.pem && ln -fs /usr/share/ca-certificates/105442.pem /etc/ssl/certs/105442.pem"
	I0603 14:53:56.157251    9752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105442.pem
	I0603 14:53:56.164755    9752 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 14:53:56.164755    9752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:41 /usr/share/ca-certificates/105442.pem
	I0603 14:53:56.176836    9752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105442.pem
	I0603 14:53:56.185553    9752 command_runner.go:130] > 3ec20f2e
	I0603 14:53:56.198458    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105442.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 14:53:56.230025    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 14:53:56.262595    9752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:53:56.270502    9752 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:53:56.270602    9752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:53:56.282789    9752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:53:56.291451    9752 command_runner.go:130] > b5213941
	I0603 14:53:56.303855    9752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
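Each CA installed above is copied into /usr/share/ca-certificates and then exposed under the hash-named path OpenSSL expects, /etc/ssl/certs/<subject-hash>.0, where the hash (51391683, 3ec20f2e, b5213941) comes from `openssl x509 -hash -noout`. A local Go sketch of that step; the log performs it remotely over SSH with test/ln -fs instead.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash for a CA and creates the
    // <hash>.0 symlink in certsDir, replacing any existing link (like `ln -fs`).
    func linkCACert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // force-replace, matching ln -fs semantics
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }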
	I0603 14:53:56.336817    9752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 14:53:56.342957    9752 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 14:53:56.342957    9752 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 14:53:56.342957    9752 kubeadm.go:928] updating node {m02 172.22.149.253 8443 v1.30.1 docker false true} ...
	I0603 14:53:56.342957    9752 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-720500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.22.149.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 14:53:56.355056    9752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 14:53:56.375351    9752 command_runner.go:130] > kubeadm
	I0603 14:53:56.375351    9752 command_runner.go:130] > kubectl
	I0603 14:53:56.375351    9752 command_runner.go:130] > kubelet
	I0603 14:53:56.375351    9752 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 14:53:56.387278    9752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0603 14:53:56.404379    9752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0603 14:53:56.435350    9752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 14:53:56.480805    9752 ssh_runner.go:195] Run: grep 172.22.154.20	control-plane.minikube.internal$ /etc/hosts
	I0603 14:53:56.490209    9752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.22.154.20	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 14:53:56.526496    9752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:53:56.747019    9752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 14:53:56.779998    9752 host.go:66] Checking if "multinode-720500" exists ...
	I0603 14:53:56.780882    9752 start.go:316] joinCluster: &{Name:multinode-720500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-720500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.22.154.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.22.149.253 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.22.151.134 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:53:56.781145    9752 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.22.149.253 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0603 14:53:56.781145    9752 host.go:66] Checking if "multinode-720500-m02" exists ...
	I0603 14:53:56.781863    9752 mustload.go:65] Loading cluster: multinode-720500
	I0603 14:53:56.782528    9752 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:53:56.783026    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:53:59.028919    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:53:59.029330    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:53:59.029330    9752 host.go:66] Checking if "multinode-720500" exists ...
	I0603 14:53:59.029932    9752 api_server.go:166] Checking apiserver status ...
	I0603 14:53:59.042835    9752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:53:59.042835    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:54:01.265059    9752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:54:01.265059    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:54:01.265059    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:54:03.879463    9752 main.go:141] libmachine: [stdout =====>] : 172.22.154.20
	
	I0603 14:54:03.879463    9752 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:54:03.879712    9752 sshutil.go:53] new ssh client: &{IP:172.22.154.20 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:54:03.992356    9752 command_runner.go:130] > 1877
	I0603 14:54:03.992489    9752 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.9496137s)
	I0603 14:54:04.008380    9752 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1877/cgroup
	W0603 14:54:04.029059    9752 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1877/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 14:54:04.042353    9752 ssh_runner.go:195] Run: ls
	I0603 14:54:04.050957    9752 api_server.go:253] Checking apiserver healthz at https://172.22.154.20:8443/healthz ...
	I0603 14:54:04.057746    9752 api_server.go:279] https://172.22.154.20:8443/healthz returned 200:
	ok
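The health probe above finds the kube-apiserver PID with pgrep and then hits https://<control-plane-IP>:8443/healthz, treating an HTTP 200 as healthy before it moves on to drain the worker node. A minimal Go version of the HTTP half; TLS verification is skipped only to keep the sketch short, and a real check should trust the cluster CA instead.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz does a bare GET against the apiserver /healthz endpoint and
    // reports whether it answered 200, as in the api_server.go lines above.
    func checkHealthz(endpoint string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        resp, err := client.Get(endpoint)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK, nil
    }

    func main() {
        ok, err := checkHealthz("https://172.22.154.20:8443/healthz")
        fmt.Println(ok, err)
    }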
	I0603 14:54:04.070207    9752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-720500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0603 14:54:04.255055    9752 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-fmfz2, kube-system/kube-proxy-sm9rr
	I0603 14:54:07.287734    9752 command_runner.go:130] > node/multinode-720500-m02 cordoned
	I0603 14:54:07.287734    9752 command_runner.go:130] > pod "busybox-fc5497c4f-mjhcf" has DeletionTimestamp older than 1 seconds, skipping
	I0603 14:54:07.287734    9752 command_runner.go:130] > node/multinode-720500-m02 drained
	I0603 14:54:07.287999    9752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-720500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.2177653s)
	I0603 14:54:07.288088    9752 node.go:128] successfully drained node "multinode-720500-m02"
	I0603 14:54:07.288155    9752 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0603 14:54:07.288250    9752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	
	
	==> Docker <==
	Jun 03 14:51:48 multinode-720500 dockerd[1054]: 2024/06/03 14:51:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:51 multinode-720500 dockerd[1054]: 2024/06/03 14:51:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:52 multinode-720500 dockerd[1054]: 2024/06/03 14:51:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:55 multinode-720500 dockerd[1054]: 2024/06/03 14:51:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 03 14:51:56 multinode-720500 dockerd[1054]: 2024/06/03 14:51:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f9b260d61dfbd       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   1bc1567075734       coredns-7db6d8ff4d-c9wpc
	291b656660b4b       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   526c48b9021d6       busybox-fc5497c4f-n2t5d
	c81abdbb29c7c       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   b4a4ad712a66e       storage-provisioner
	008dec75d90c7       ac1c61439df46                                                                                         3 minutes ago       Running             kindnet-cni               1                   a3698c141b116       kindnet-26s27
	2061be0913b2b       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   b4a4ad712a66e       storage-provisioner
	42926c33070ce       747097150317f                                                                                         4 minutes ago       Running             kube-proxy                1                   2ae2b089ecf3b       kube-proxy-64l9x
	885576ffcadd7       91be940803172                                                                                         4 minutes ago       Running             kube-apiserver            0                   192b150e443d2       kube-apiserver-multinode-720500
	480ef64cfa226       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   3e60bc15f541e       etcd-multinode-720500
	f14b3b67d8f28       25a1387cdab82                                                                                         4 minutes ago       Running             kube-controller-manager   1                   29feb700b8ebf       kube-controller-manager-multinode-720500
	e2d000674d525       a52dc94f0a912                                                                                         4 minutes ago       Running             kube-scheduler            1                   0461b752e7281       kube-scheduler-multinode-720500
	a76f9e773a2f6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   e2a9c5dc3b1b0       busybox-fc5497c4f-n2t5d
	68e49c3e6ddaa       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   1ac710138e878       coredns-7db6d8ff4d-c9wpc
	ab840a6a9856d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              27 minutes ago      Exited              kindnet-cni               0                   91df341636e89       kindnet-26s27
	3823f2e2bdb28       747097150317f                                                                                         27 minutes ago      Exited              kube-proxy                0                   45c98b77811e1       kube-proxy-64l9x
	63a6ebee2e836       25a1387cdab82                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   19b3080db261a       kube-controller-manager-multinode-720500
	ec3860b2bb3ef       a52dc94f0a912                                                                                         27 minutes ago      Exited              kube-scheduler            0                   73f8312902b01       kube-scheduler-multinode-720500
	
	
	==> coredns [68e49c3e6dda] <==
	[INFO] 10.244.0.3:57391 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000513s
	[INFO] 10.244.0.3:40338 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001263s
	[INFO] 10.244.0.3:45271 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001333s
	[INFO] 10.244.0.3:50324 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000215901s
	[INFO] 10.244.0.3:51522 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001987s
	[INFO] 10.244.0.3:39150 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001291s
	[INFO] 10.244.0.3:56081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001424s
	[INFO] 10.244.1.2:46468 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003026s
	[INFO] 10.244.1.2:57532 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130801s
	[INFO] 10.244.1.2:36166 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001469s
	[INFO] 10.244.1.2:58091 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001725s
	[INFO] 10.244.0.3:52049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274601s
	[INFO] 10.244.0.3:51870 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002814s
	[INFO] 10.244.0.3:51517 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001499s
	[INFO] 10.244.0.3:39242 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000636s
	[INFO] 10.244.1.2:34329 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260201s
	[INFO] 10.244.1.2:47951 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001521s
	[INFO] 10.244.1.2:52718 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0003583s
	[INFO] 10.244.1.2:45357 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001838s
	[INFO] 10.244.0.3:50865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001742s
	[INFO] 10.244.0.3:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001322s
	[INFO] 10.244.0.3:51977 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	[INFO] 10.244.0.3:47306 - 5 "PTR IN 1.144.22.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001807s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
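	
	The query pattern above (NXDOMAIN for "kubernetes.default." and "kubernetes.default.default.svc.cluster.local.", NOERROR only for the fully qualified "kubernetes.default.svc.cluster.local.") is the pod's resolv.conf search path expanding a short service name. A minimal Go sketch of such an in-cluster lookup, assuming it runs inside a pod that uses the cluster DNS:
	
	    package main
	
	    import (
	        "fmt"
	        "net"
	    )
	
	    func main() {
	        // Resolving the fully qualified service name avoids the search-path
	        // expansion that produces the NXDOMAIN attempts logged by coredns above.
	        addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	        if err != nil {
	            fmt.Println("lookup failed:", err)
	            return
	        }
	        fmt.Println("kubernetes service IPs:", addrs) // typically the cluster IP, e.g. 10.96.0.1
	    }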
	
	
	==> coredns [f9b260d61dfb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1df4b45205760a829d2b4efd62e6761cabaeb3e36537c3de4513b5f53ef6eb4f2b53c327cd39c823777bb78b5f7b2580d41c534fda1f52a64028d60b07b20d26
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44244 - 27530 "HINFO IN 6157212600695805867.8146164028617998750. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029059168s
	
	
	==> describe nodes <==
	Name:               multinode-720500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-720500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=multinode-720500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T14_27_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 14:27:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-720500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 14:54:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:27:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 14:51:20 +0000   Mon, 03 Jun 2024 14:51:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.22.154.20
	  Hostname:    multinode-720500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1c31924319744c587cc3327e70686c4
	  System UUID:                ea941aa7-cd12-1640-be08-34f8de2baf60
	  Boot ID:                    81a28d6f-5e2f-4dbf-9879-01594b427fd6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-n2t5d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-c9wpc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-720500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-26s27                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-720500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-controller-manager-multinode-720500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-64l9x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-720500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 27m                  kube-proxy       
	  Normal  Starting                 3m58s                kube-proxy       
	  Normal  Starting                 27m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)    kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)    kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)    kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     27m                  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                  node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	  Normal  NodeReady                26m                  kubelet          Node multinode-720500 status is now: NodeReady
	  Normal  Starting                 4m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node multinode-720500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node multinode-720500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                node-controller  Node multinode-720500 event: Registered Node multinode-720500 in Controller
	
	
	Name:               multinode-720500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-720500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=multinode-720500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T14_30_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 14:30:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	                    node.kubernetes.io/unschedulable:NoSchedule
	Unschedulable:      true
	Lease:
	  HolderIdentity:  multinode-720500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 14:47:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 14:46:48 +0000   Mon, 03 Jun 2024 14:48:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.22.146.196
	  Hostname:    multinode-720500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 235e819893284fd6a235e0cb3c7475f0
	  System UUID:                e57aaa06-73e1-b24d-bfac-b1ae5e512ff1
	  Boot ID:                    fe92bdd5-fbf4-4f1a-9684-a535d77de9c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mjhcf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-fmfz2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-sm9rr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-720500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-720500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-720500-m02 status is now: NodeReady
	  Normal  NodeNotReady             6m34s              node-controller  Node multinode-720500-m02 status is now: NodeNotReady
	  Normal  RegisteredNode           3m49s              node-controller  Node multinode-720500-m02 event: Registered Node multinode-720500-m02 in Controller
	
	
	Name:               multinode-720500-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-720500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=multinode-720500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T14_46_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 14:46:04 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-720500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 14:47:06 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 14:46:11 +0000   Mon, 03 Jun 2024 14:47:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.22.151.134
	  Hostname:    multinode-720500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3fc7859c5954f1297433aed117b91b8
	  System UUID:                e10deb53-3c27-6749-b4b3-758259579a7c
	  Boot ID:                    c5481ad8-4fd9-4085-86d3-6f705a8caf45
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-h58hc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-ctm5l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 8m32s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                    kubelet          Node multinode-720500-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  8m37s (x2 over 8m37s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s (x2 over 8m37s)  kubelet          Node multinode-720500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s (x2 over 8m37s)  kubelet          Node multinode-720500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m34s                  node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	  Normal  NodeReady                8m30s                  kubelet          Node multinode-720500-m03 status is now: NodeReady
	  Normal  NodeNotReady             6m54s                  node-controller  Node multinode-720500-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           3m49s                  node-controller  Node multinode-720500-m03 event: Registered Node multinode-720500-m03 in Controller
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +5.342920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.685939] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.735023] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[Jun 3 14:49] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +50.878858] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.173829] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[Jun 3 14:50] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	[  +0.115993] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.526092] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	[  +0.219569] systemd-fstab-generator[1032]: Ignoring "noauto" option for root device
	[  +0.239915] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	[  +2.915659] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.214861] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +0.207351] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	[  +0.266530] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	[  +0.876661] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	[  +0.110633] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.640158] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +1.365325] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.844179] kauditd_printk_skb: 25 callbacks suppressed
	[  +3.106296] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	[  +8.568344] kauditd_printk_skb: 70 callbacks suppressed
	
	
	==> etcd [480ef64cfa22] <==
	{"level":"info","ts":"2024-06-03T14:50:36.068652Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T14:50:36.06872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T14:50:36.068733Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T14:50:36.069034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff switched to configuration voters=(11939092234824790527)"}
	{"level":"info","ts":"2024-06-03T14:50:36.069111Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","added-peer-id":"a5b02d21ad5b31ff","added-peer-peer-urls":["https://172.22.150.195:2380"]}
	{"level":"info","ts":"2024-06-03T14:50:36.069286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6a80a2fe8578e5e6","local-member-id":"a5b02d21ad5b31ff","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T14:50:36.069633Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T14:50:36.069793Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a5b02d21ad5b31ff","initial-advertise-peer-urls":["https://172.22.154.20:2380"],"listen-peer-urls":["https://172.22.154.20:2380"],"advertise-client-urls":["https://172.22.154.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.22.154.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T14:50:36.069837Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T14:50:36.069995Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.22.154.20:2380"}
	{"level":"info","ts":"2024-06-03T14:50:36.070008Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.22.154.20:2380"}
	{"level":"info","ts":"2024-06-03T14:50:37.714622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T14:50:37.715027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T14:50:37.71538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgPreVoteResp from a5b02d21ad5b31ff at term 2"}
	{"level":"info","ts":"2024-06-03T14:50:37.715714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T14:50:37.715867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff received MsgVoteResp from a5b02d21ad5b31ff at term 3"}
	{"level":"info","ts":"2024-06-03T14:50:37.716205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a5b02d21ad5b31ff became leader at term 3"}
	{"level":"info","ts":"2024-06-03T14:50:37.716405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a5b02d21ad5b31ff elected leader a5b02d21ad5b31ff at term 3"}
	{"level":"info","ts":"2024-06-03T14:50:37.724847Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T14:50:37.724791Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a5b02d21ad5b31ff","local-member-attributes":"{Name:multinode-720500 ClientURLs:[https://172.22.154.20:2379]}","request-path":"/0/members/a5b02d21ad5b31ff/attributes","cluster-id":"6a80a2fe8578e5e6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T14:50:37.725564Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T14:50:37.726196Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T14:50:37.726364Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T14:50:37.729309Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T14:50:37.730855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.22.154.20:2379"}
	
	
	==> kernel <==
	 14:54:41 up 5 min,  0 users,  load average: 0.21, 0.33, 0.17
	Linux multinode-720500 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [008dec75d90c] <==
	I0603 14:53:52.855942       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:54:02.866786       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:54:02.866877       1 main.go:227] handling current node
	I0603 14:54:02.866894       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:54:02.866903       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:54:02.867048       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:54:02.867058       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:54:12.875091       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:54:12.875146       1 main.go:227] handling current node
	I0603 14:54:12.876923       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:54:12.877016       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:54:12.877470       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:54:12.877492       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:54:22.892540       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:54:22.892657       1 main.go:227] handling current node
	I0603 14:54:22.892674       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:54:22.892682       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:54:22.893024       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:54:22.893203       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:54:32.901950       1 main.go:223] Handling node with IPs: map[172.22.154.20:{}]
	I0603 14:54:32.902072       1 main.go:227] handling current node
	I0603 14:54:32.902090       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:54:32.902098       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:54:32.902819       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:54:32.902926       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ab840a6a9856] <==
	I0603 14:47:23.306196       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:47:33.320017       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:47:33.320267       1 main.go:227] handling current node
	I0603 14:47:33.320364       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:47:33.320399       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:47:33.320800       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:47:33.320833       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:47:43.329989       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:47:43.330122       1 main.go:227] handling current node
	I0603 14:47:43.330326       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:47:43.330486       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:47:43.331007       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:47:43.331092       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:47:53.346870       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:47:53.347021       1 main.go:227] handling current node
	I0603 14:47:53.347035       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:47:53.347043       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:47:53.347400       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:47:53.347581       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	I0603 14:48:03.360705       1 main.go:223] Handling node with IPs: map[172.22.150.195:{}]
	I0603 14:48:03.360878       1 main.go:227] handling current node
	I0603 14:48:03.360896       1 main.go:223] Handling node with IPs: map[172.22.146.196:{}]
	I0603 14:48:03.360904       1 main.go:250] Node multinode-720500-m02 has CIDR [10.244.1.0/24] 
	I0603 14:48:03.361256       1 main.go:223] Handling node with IPs: map[172.22.151.134:{}]
	I0603 14:48:03.361334       1 main.go:250] Node multinode-720500-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [885576ffcadd] <==
	I0603 14:50:39.410099       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 14:50:39.413505       1 aggregator.go:165] initial CRD sync complete...
	I0603 14:50:39.413538       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 14:50:39.413547       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 14:50:39.450903       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 14:50:39.462513       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 14:50:39.464182       1 policy_source.go:224] refreshing policies
	I0603 14:50:39.465876       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 14:50:39.466992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 14:50:39.468755       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 14:50:39.469769       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 14:50:39.474781       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 14:50:39.486280       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 14:50:39.486306       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 14:50:39.514217       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 14:50:39.514539       1 cache.go:39] Caches are synced for autoregister controller
	I0603 14:50:40.271657       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0603 14:50:40.806504       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.22.154.20]
	I0603 14:50:40.811756       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 14:50:40.836037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 14:50:42.134633       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 14:50:42.350516       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 14:50:42.378696       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 14:50:42.521546       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 14:50:42.533218       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [63a6ebee2e83] <==
	I0603 14:30:30.530460       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m02\" does not exist"
	I0603 14:30:30.563054       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m02" podCIDRs=["10.244.1.0/24"]
	I0603 14:30:31.846889       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m02"
	I0603 14:30:49.741096       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:31:16.611365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.145667ms"
	I0603 14:31:16.634251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.843998ms"
	I0603 14:31:16.634722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="196.103µs"
	I0603 14:31:16.635057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.4µs"
	I0603 14:31:16.670503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.001µs"
	I0603 14:31:19.698737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.129108ms"
	I0603 14:31:19.698833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.8µs"
	I0603 14:31:20.055879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.87041ms"
	I0603 14:31:20.057158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.2µs"
	I0603 14:35:14.351135       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:35:14.351827       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:35:14.376803       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.2.0/24"]
	I0603 14:35:16.927010       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-720500-m03"
	I0603 14:35:33.157459       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:43:17.065455       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:45:58.451014       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:46:04.988996       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:46:04.989982       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-720500-m03\" does not exist"
	I0603 14:46:05.046032       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-720500-m03" podCIDRs=["10.244.3.0/24"]
	I0603 14:46:11.957254       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	I0603 14:47:47.196592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-720500-m02"
	
	
	==> kube-controller-manager [f14b3b67d8f2] <==
	I0603 14:50:52.309899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.433483ms"
	I0603 14:50:52.310618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.9µs"
	I0603 14:50:52.311874       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 14:50:52.315773       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 14:50:52.322625       1 shared_informer.go:320] Caches are synced for job
	I0603 14:50:52.328121       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 14:50:52.345391       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 14:50:52.415295       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0603 14:50:52.416018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 14:50:52.421610       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 14:50:52.453966       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:50:52.465679       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 14:50:52.907461       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:50:52.937479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 14:50:52.937578       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 14:51:22.286800       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0603 14:51:45.740640       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.050345ms"
	I0603 14:51:45.740735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.201µs"
	I0603 14:51:45.758728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.201µs"
	I0603 14:51:45.833756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.845189ms"
	I0603 14:51:45.833914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.301µs"
	I0603 14:54:04.336941       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.755954ms"
	I0603 14:54:04.352865       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.575997ms"
	I0603 14:54:04.374697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.762236ms"
	I0603 14:54:04.374771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.7µs"
	
	
	==> kube-proxy [3823f2e2bdb2] <==
	I0603 14:27:34.209759       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:27:34.223354       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.150.195"]
	I0603 14:27:34.293018       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:27:34.293146       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:27:34.293240       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:27:34.299545       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:27:34.300745       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:27:34.300860       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:27:34.304329       1 config.go:192] "Starting service config controller"
	I0603 14:27:34.304371       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:27:34.304437       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:27:34.304447       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:27:34.308322       1 config.go:319] "Starting node config controller"
	I0603 14:27:34.308362       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:27:34.405130       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 14:27:34.409156       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [42926c33070c] <==
	I0603 14:50:42.069219       1 server_linux.go:69] "Using iptables proxy"
	I0603 14:50:42.114052       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.22.154.20"]
	I0603 14:50:42.256500       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 14:50:42.256559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 14:50:42.256598       1 server_linux.go:165] "Using iptables Proxier"
	I0603 14:50:42.262735       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 14:50:42.263687       1 server.go:872] "Version info" version="v1.30.1"
	I0603 14:50:42.263771       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:50:42.271889       1 config.go:192] "Starting service config controller"
	I0603 14:50:42.273191       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 14:50:42.273658       1 config.go:319] "Starting node config controller"
	I0603 14:50:42.273675       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 14:50:42.275244       1 config.go:101] "Starting endpoint slice config controller"
	I0603 14:50:42.279063       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 14:50:42.373930       1 shared_informer.go:320] Caches are synced for node config
	I0603 14:50:42.373994       1 shared_informer.go:320] Caches are synced for service config
	I0603 14:50:42.379201       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e2d000674d52] <==
	I0603 14:50:36.598072       1 serving.go:380] Generated self-signed cert in-memory
	W0603 14:50:39.337367       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 14:50:39.337481       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 14:50:39.337517       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 14:50:39.337620       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 14:50:39.434477       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 14:50:39.434769       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 14:50:39.439758       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 14:50:39.442615       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 14:50:39.442644       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 14:50:39.443721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 14:50:39.542876       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ec3860b2bb3e] <==
	E0603 14:27:16.294495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 14:27:16.364252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 14:27:16.364604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 14:27:16.422522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 14:27:16.422581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 14:27:16.468112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 14:27:16.468324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 14:27:16.510809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 14:27:16.511288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 14:27:16.596260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 14:27:16.596369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 14:27:16.607837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 14:27:16.608073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 14:27:16.665087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 14:27:16.666440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 14:27:16.711247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 14:27:16.711594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 14:27:16.716923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 14:27:16.716968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 14:27:16.731690       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 14:27:16.732816       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 14:27:16.743716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 14:27:16.743766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0603 14:27:18.441261       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0603 14:48:07.717597       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 03 14:51:25 multinode-720500 kubelet[1525]: I0603 14:51:25.959992    1525 scope.go:117] "RemoveContainer" containerID="2061be0913b2b7bbeb8910640a3eb64b2687806840f98e8fafa8046e641af566"
	Jun 03 14:51:33 multinode-720500 kubelet[1525]: E0603 14:51:33.993879    1525 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:51:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:51:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:51:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:51:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.037024    1525 scope.go:117] "RemoveContainer" containerID="dcd798ff8a4661302e83f6f11f14422de529b0502fcd6143a4a29a3f45757a8a"
	Jun 03 14:51:34 multinode-720500 kubelet[1525]: I0603 14:51:34.091663    1525 scope.go:117] "RemoveContainer" containerID="5185046feae6a986658119ffc29d3a23423e83dba5ada983e73072c57ee6ad2d"
	Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.627773    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="526c48b9021d624761c10f5fc02f8bf24cfa0fba9cedb8c4ffc7ba1e1b873891"
	Jun 03 14:51:44 multinode-720500 kubelet[1525]: I0603 14:51:44.667520    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc15670757342f66009ba040d6ba949bcf31fd55a784268a563387298e19eb7"
	Jun 03 14:52:33 multinode-720500 kubelet[1525]: E0603 14:52:33.992879    1525 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:52:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:52:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:52:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:52:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:53:33 multinode-720500 kubelet[1525]: E0603 14:53:33.994014    1525 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:53:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:53:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:53:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:53:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:54:33 multinode-720500 kubelet[1525]: E0603 14:54:33.997541    1525 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:54:33 multinode-720500 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:54:33 multinode-720500 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:54:33 multinode-720500 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:54:33 multinode-720500 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 14:54:30.338562    6100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-720500 -n multinode-720500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-720500 -n multinode-720500: (12.3402917s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-720500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-s7qhm
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-720500 describe pod busybox-fc5497c4f-s7qhm
helpers_test.go:282: (dbg) kubectl --context multinode-720500 describe pod busybox-fc5497c4f-s7qhm:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-s7qhm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b5phr (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-b5phr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  57s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (491.36s)

TestNoKubernetes/serial/StartWithK8s (303.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-528900 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-528900 --driver=hyperv: exit status 1 (4m59.6852183s)

                                                
                                                
-- stdout --
	* [NoKubernetes-528900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-528900" primary control-plane node in "NoKubernetes-528900" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 15:11:31.758097    2296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-528900 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-528900 -n NoKubernetes-528900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-528900 -n NoKubernetes-528900: exit status 7 (3.7892759s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 15:16:31.459740     796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0603 15:16:35.076335     796 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-528900".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-528900 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-528900:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-528900" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (303.48s)

TestPause/serial/DeletePaused (10800.473s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-767100 --alsologtostderr -v=5
panic: test timed out after 3h0m0s
running tests:
	TestKubernetesUpgrade (5m27s)
	TestPause (10m35s)
	TestPause/serial (10m35s)
	TestPause/serial/DeletePaused (24s)
	TestRunningBinaryUpgrade (10m35s)
	TestStartStop (10m35s)
	TestStoppedBinaryUpgrade (3m27s)
	TestStoppedBinaryUpgrade/Upgrade (3m25s)

                                                
                                                
goroutine 2235 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 4 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000163040, 0xc00091bbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000126720, {0x4bb1f80, 0x2a, 0x2a}, {0x27e6567?, 0x62806f?, 0x4bd5240?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000733d60)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000733d60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 14 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00014fb80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 1140 [chan send, 137 minutes]:
os/exec.(*Cmd).watchCtx(0xc000904000, 0xc0018887e0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1139
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2195 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0007b1630)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015804e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015804e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015804e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0015804e0, 0xc0006b5900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2113
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 178 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3810900, 0xc0000541e0}, 0xc000851f50, 0xc000851f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3810900, 0xc0000541e0}, 0x90?, 0xc000851f50, 0xc000851f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3810900?, 0xc0000541e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000851fd0?, 0x6fe404?, 0xc0007e0420?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 153
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1011 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1010
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1356 [chan send, 129 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015082c0, 0xc0018889c0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 953
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 87 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 27
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 853 [IO wait, 160 minutes]:
internal/poll.runtime_pollWait(0x27363355470, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000100408?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0017056a0, 0xc002075bb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc001705688, 0x3c4, {0xc0007e25a0?, 0x0?, 0x0?}, 0xc000100008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc001705688, 0xc002075d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc001705688)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0000aa820)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0000aa820)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0004bc0f0, {0x38039a0, 0xc0000aa820})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0004bc0f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0008ff860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 850
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 2199 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0007b1630)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001581520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001581520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001581520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001581520, 0xc0006b5e80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2113
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2230 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc001456000, 0xc00090e2a0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2227
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 153 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000711640, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 171
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2257 [syscall, locked to thread]:
syscall.SyscallN(0x7ffc53764de0?, {0xc0009e3ab0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6d8, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00088c6c0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001456160)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001456160)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000160680, 0xc001456160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateDelete({0x3810740?, 0xc000484000?}, 0xc000160680, {0xc001dc8420?, 0xc015a4d840?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:132 +0x16f
k8s.io/minikube/test/integration.TestPause.func1.1(0xc000160680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc000160680, 0xc0005de040)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2200
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 152 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0008ace40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 171
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 113 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000711610, 0x3c)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x227f6e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0008acd20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000711640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001fcb70, {0x37ece20, 0xc0005b79b0}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001fcb70, 0x3b9aca00, 0x0, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 153
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 735 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0007b1630)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008feb60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008feb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc0008feb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc0008feb60, 0x32965c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2229 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc00140bb20?, 0x587ea5?, 0x4c626a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x27ad641?, 0xc00140bb80?, 0x57fdd6?, 0x4c626a0?, 0xc00140bc08?, 0x57281b?, 0x568ba6?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x488, {0xc000447600?, 0x200, 0xc000447600?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00145c008?, {0xc000447600?, 0x5ac1be?, 0x200?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00145c008, {0xc000447600, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a67d0, {0xc000447600?, 0xc00140bd98?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014fc900, {0x37eb9e0, 0xc0009a8c10})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x37ebb20, 0xc0014fc900}, {0x37eb9e0, 0xc0009a8c10}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x37ebb20, 0xc0014fc900})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x570c36?, {0x37ebb20?, 0xc0014fc900?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x37ebb20, 0xc0014fc900}, {0x37ebaa0, 0xc0000a67d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0007e04e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2227
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2226 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc000904580, 0xc001888300)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2162
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 993 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000711990, 0x32)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x227f6e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00194c8a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000711d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001f3ae70, {0x37ece20, 0xc001f7d2c0}, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001f3ae70, 0x3b9aca00, 0x0, 0x1, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 994
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 179 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 178
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 737 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0007b1630)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008feea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008feea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc0008feea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc0008feea0, 0x32965d0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2258 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc0006c3080, 0xc0007e00c0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2160
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2227 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7ffc53764de0?, {0xc00084f6a8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x71c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000797aa0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001456000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001456000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc001581860, 0xc001456000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:183 +0x385
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc00084fc20?, {0x37f9298, 0xc0000aa200}, 0x32978a0, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x37f9298?, 0xc0000aa200?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc001689e28, 0x3b9aca00, 0x1a3185c5000, {0xc001689d08?, 0x227f6e0?, 0x5bf288?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xef
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc001581860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:188 +0x2de
testing.tRunner(0xc001581860, 0xc000710080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2161
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2161 [chan receive, 4 minutes]:
testing.(*T).Run(0xc000c66340, {0x278e62a?, 0x3005753e800?}, 0xc000710080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc000c66340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2bc
testing.tRunner(0xc000c66340, 0x32966f0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 736 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0007b1630)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008fed00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008fed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc0008fed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc0008fed00, 0x32965b8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2198 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0007b1630)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001581380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001581380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001581380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001581380, 0xc0006b5e00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2113
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2228 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0xc0000aa200?, {0xc001689b20?, 0x587ea5?, 0x4c626a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0xc001689b80?, 0x57fdd6?, 0x4c626a0?, 0xc001689c08?, 0x572985?, 0x2735db90eb8?, 0x41?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x30c, {0xc000125200?, 0x200, 0x62417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001554788?, {0xc000125200?, 0x136882f?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001554788, {0xc000125200, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6340, {0xc000125200?, 0xc0015a4000?, 0x68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014fc8d0, {0x37eb9e0, 0xc000692148})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x37ebb20, 0xc0014fc8d0}, {0x37eb9e0, 0xc000692148}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x37ebb20, 0xc0014fc8d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x0?, {0x37ebb20?, 0xc0014fc8d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x37ebb20, 0xc0014fc8d0}, {0x37ebaa0, 0xc0000a6340}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000710080?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2227
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 1010 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3810900, 0xc0000541e0}, 0xc0014abf50, 0xc0014abf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3810900, 0xc0000541e0}, 0xa0?, 0xc0014abf50, 0xc0014abf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3810900?, 0xc0000541e0?}, 0x0?, 0xb065e8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014abfd0?, 0x6fe404?, 0xc00011bad0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 994
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2197 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0007b1630)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015811e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015811e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015811e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0015811e0, 0xc0006b5d40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2113
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2196 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0007b1630)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001580680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001580680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001580680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001580680, 0xc0006b5940)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2113
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2113 [chan receive, 12 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001580000, 0x32968c0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2158
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2158 [chan receive, 12 minutes]:
testing.(*T).Run(0xc000161ba0, {0x278a679?, 0x6b7333?}, 0x32968c0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc000161ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000161ba0, 0x32966e8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2192 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc0009d1b58?, {0xc0009d1b20?, 0x587ea5?, 0x4c626a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0016ce06e?, 0xc0009d1b80?, 0x57fdd6?, 0x4c626a0?, 0xc0009d1c08?, 0x572985?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x4c4, {0xc00025b26f?, 0x591, 0x62417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001608508?, {0xc00025b26f?, 0x0?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001608508, {0xc00025b26f, 0x591, 0x591})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0009a8480, {0xc00025b26f?, 0x27362ff19a8?, 0x20c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0009d4090, {0x37eb9e0, 0xc00011c0c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x37ebb20, 0xc0009d4090}, {0x37eb9e0, 0xc00011c0c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x37ebb20, 0xc0009d4090})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x570c36?, {0x37ebb20?, 0xc0009d4090?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x37ebb20, 0xc0009d4090}, {0x37ebaa0, 0xc0009a8480}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0007e0300?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2160
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2209 [syscall, locked to thread]:
syscall.SyscallN(0x273635442d8?, {0xc001b27b20?, 0x587ea5?, 0x8?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x273635442d8?, 0xc001b27b80?, 0x57fdd6?, 0x4c626a0?, 0xc001b27c08?, 0x572985?, 0x0?, 0x10000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x768, {0xc00141f7b9?, 0x4847, 0x62417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000902788?, {0xc00141f7b9?, 0x5ac1be?, 0x10000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000902788, {0xc00141f7b9, 0x4847, 0x4847})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006ae650, {0xc00141f7b9?, 0xb15?, 0x7f94?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0005b7440, {0x37eb9e0, 0xc0009a89c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x37ebb20, 0xc0005b7440}, {0x37eb9e0, 0xc0009a89c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x37ebb20, 0xc0005b7440})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x570c36?, {0x37ebb20?, 0xc0005b7440?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x37ebb20, 0xc0005b7440}, {0x37ebaa0, 0xc0006ae650}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00098d860?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2162
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 771 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0007b1630)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008ff1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008ff1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc0008ff1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc0008ff1e0, 0x32965f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 897 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00194c9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 978
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 770 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0007b1630)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008ff040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008ff040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc0008ff040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:83 +0x92
testing.tRunner(0xc0008ff040, 0x3296600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2193 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc000cfe8c0?, {0xc001d89b20?, 0x587ea5?, 0x4c626a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0xc001d89b80?, 0x57fdd6?, 0x4c626a0?, 0xc001d89c08?, 0x572985?, 0x2735db90598?, 0x5720657572743a77?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x584, {0xc0009d8210?, 0x1df0, 0x62417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001608c88?, {0xc0009d8210?, 0x5ac1be?, 0x4000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001608c88, {0xc0009d8210, 0x1df0, 0x1df0})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0009a8a08, {0xc0009d8210?, 0xc001d89d98?, 0x1e38?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0009d40c0, {0x37eb9e0, 0xc000692080})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x37ebb20, 0xc0009d40c0}, {0x37eb9e0, 0xc000692080}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x37ebb20, 0xc0009d40c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x570c36?, {0x37ebb20?, 0xc0009d40c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x37ebb20, 0xc0009d40c0}, {0x37ebaa0, 0xc0009a8a08}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001f26480?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2160
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 994 [chan receive, 139 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000711d40, 0xc0000541e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 978
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2274 [syscall, locked to thread]:
syscall.SyscallN(0xc0006b0790?, {0xc0015c1b20?, 0x37df5f0?, 0x1?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0015c1bf0?, 0x6fc457?, 0xc000112219?, 0x1e?, 0xc0015c1c08?, 0x57281b?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6dc, {0xc00087b44e?, 0x3b2, 0x62417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001da6288?, {0xc00087b44e?, 0x20e13ff?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001da6288, {0xc00087b44e, 0x3b2, 0x3b2})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000692140, {0xc00087b44e?, 0xc000cff180?, 0x24?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000826c30, {0x37eb9e0, 0xc0006ae698})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x37ebb20, 0xc000826c30}, {0x37eb9e0, 0xc0006ae698}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0015c1e78?, {0x37ebb20, 0xc000826c30})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0015c1f38?, {0x37ebb20?, 0xc000826c30?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x37ebb20, 0xc000826c30}, {0x37ebaa0, 0xc000692140}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0018881e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2257
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2160 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffc53764de0?, {0xc00006b960?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x4fc, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00096c7e0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0006c3080)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0006c3080)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000c661a0, 0xc0006c3080)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc000c661a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:130 +0x788
testing.tRunner(0xc000c661a0, 0x32966c8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2275 [syscall, locked to thread]:
syscall.SyscallN(0xc0009a9f80?, {0xc001407b20?, 0x587ea5?, 0x4c626a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc001407b67?, 0xc001407b80?, 0x57fdd6?, 0x4c626a0?, 0xc001407c08?, 0x572985?, 0x2735db90a28?, 0x77?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x688, {0xc00142e72f?, 0x18d1, 0x62417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001da6788?, {0xc00142e72f?, 0x5ac1be?, 0x4000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001da6788, {0xc00142e72f, 0x18d1, 0x18d1})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006921b0, {0xc00142e72f?, 0xc001407d98?, 0x1e48?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000826cf0, {0x37eb9e0, 0xc0009a8b70})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x37ebb20, 0xc000826cf0}, {0x37eb9e0, 0xc0009a8b70}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x37ebb20, 0xc000826cf0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x570c36?, {0x37ebb20?, 0xc000826cf0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x37ebb20, 0xc000826cf0}, {0x37ebaa0, 0xc0006921b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001db8020?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2257
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2200 [chan receive]:
testing.(*T).Run(0xc0015816c0, {0x27996f8?, 0x24?}, 0xc0005de040)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc0015816c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc0015816c0, 0xc000c6b260)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2108
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2276 [select]:
os/exec.(*Cmd).watchCtx(0xc001456160, 0xc00090e420)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2257
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2208 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x5e8b6a?, {0xc00085db20?, 0x587ea5?, 0x4c626a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00088a04d?, 0xc00085db80?, 0x57fdd6?, 0x4c626a0?, 0xc00085dc08?, 0x572985?, 0x2735db90108?, 0xc001d9a34d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x734, {0xc00025a24f?, 0x5b1, 0x62417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00145ca08?, {0xc00025a24f?, 0xc00085dc50?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00145ca08, {0xc00025a24f, 0x5b1, 0x5b1})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006ae610, {0xc00025a24f?, 0x0?, 0x210?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0005b7410, {0x37eb9e0, 0xc0006920a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x37ebb20, 0xc0005b7410}, {0x37eb9e0, 0xc0006920a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x2788730?, {0x37ebb20, 0xc0005b7410})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x6b83a0?, {0x37ebb20?, 0xc0005b7410?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x37ebb20, 0xc0005b7410}, {0x37ebaa0, 0xc0006ae610}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x32966a8?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2162
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2194 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0007b1630)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015801a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015801a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015801a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0015801a0, 0xc0006b4bc0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2113
	/usr/local/go/src/testing/testing.go:1742 +0x390
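
Goroutine 2194 above (and 2106 below) is not stuck on anything external: it is a test that called t.Parallel() via MaybeParallel and is waiting in testing.(*testContext).waitParallel for one of the -test.parallel slots to free up, which is what the long "chan receive, 12 minutes" state means. A small self-contained illustration of that gate:

// Each subtest calls t.Parallel and then has to wait for a -test.parallel
// slot; with only a few slots and long-running tests holding them, a subtest
// can sit in waitParallel for many minutes, exactly as the dump shows.
// Run with e.g. `go test -parallel 2` to see two run at a time.
package integration_sketch

import (
	"testing"
	"time"
)

func TestParallelGate(t *testing.T) {
	for _, name := range []string{"a", "b", "c", "d"} {
		t.Run(name, func(t *testing.T) {
			t.Parallel()                // blocks in testing.(*testContext).waitParallel until a slot is free
			time.Sleep(2 * time.Second) // stand-in for real work
		})
	}
}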

                                                
                                                
goroutine 2106 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0007b1630)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000160820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000160820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000160820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:47 +0x39
testing.tRunner(0xc000160820, 0x32966a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2162 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0x7ffc53764de0?, {0xc0009b7798?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x708, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001c91770)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000904580)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000904580)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000c664e0, 0xc000904580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc000c664e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:222 +0x375
testing.tRunner(0xc000c664e0, 0x3296668)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2108 [chan receive, 12 minutes]:
testing.(*T).Run(0xc0001616c0, {0x278bb8c?, 0xd18c2e2800?}, 0xc000c6b260)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc0001616c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc0001616c0, 0x32966b8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
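
Goroutine 2108 above and goroutine 2200 earlier are the TestPause parent chain: each is blocked in the channel receive inside testing.(*T).Run, which only returns when the subtest it started finishes (or goes parallel). If the leaf subtest hangs on an external command, every ancestor stays parked with it, which is the pattern in this dump. A tiny illustration, not pause_test.go itself:

// Why a parent test goroutine shows up as "chan receive" inside
// testing.(*T).Run: t.Run blocks until the subtest it started returns.
package integration_sketch

import (
	"testing"
	"time"
)

func TestOuter(t *testing.T) {
	t.Run("group", func(t *testing.T) { // TestOuter blocks here until "group" finishes
		t.Run("leaf", func(t *testing.T) { // "group" blocks here until "leaf" finishes
			time.Sleep(10 * time.Minute) // stand-in for a hung external command
		})
	})
}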

                                                
                                    

Test pass (124/190)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 17.73
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 1.36
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.34
12 TestDownloadOnly/v1.30.1/json-events 11.53
13 TestDownloadOnly/v1.30.1/preload-exists 0
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.45
18 TestDownloadOnly/v1.30.1/DeleteAll 1.38
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 1.34
21 TestBinaryMirror 7.21
22 TestOffline 428.69
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.27
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.28
27 TestAddons/Setup 442.69
30 TestAddons/parallel/Ingress 65.4
31 TestAddons/parallel/InspektorGadget 26.67
32 TestAddons/parallel/MetricsServer 21.7
33 TestAddons/parallel/HelmTiller 29.64
35 TestAddons/parallel/CSI 80.29
36 TestAddons/parallel/Headlamp 36.42
37 TestAddons/parallel/CloudSpanner 22.5
38 TestAddons/parallel/LocalPath 53.92
39 TestAddons/parallel/NvidiaDevicePlugin 22.3
40 TestAddons/parallel/Yakd 5.02
41 TestAddons/parallel/Volcano 77.68
44 TestAddons/serial/GCPAuth/Namespaces 0.35
45 TestAddons/StoppedEnableDisable 54.54
57 TestErrorSpam/start 17.43
58 TestErrorSpam/status 37.05
59 TestErrorSpam/pause 23.06
60 TestErrorSpam/unpause 23.34
61 TestErrorSpam/stop 57
64 TestFunctional/serial/CopySyncFile 0.03
65 TestFunctional/serial/StartWithProxy 208.08
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 128.75
68 TestFunctional/serial/KubeContext 0.13
69 TestFunctional/serial/KubectlGetPods 0.21
72 TestFunctional/serial/CacheCmd/cache/add_remote 26.43
73 TestFunctional/serial/CacheCmd/cache/add_local 11.04
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.26
75 TestFunctional/serial/CacheCmd/cache/list 0.24
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.47
77 TestFunctional/serial/CacheCmd/cache/cache_reload 36.52
78 TestFunctional/serial/CacheCmd/cache/delete 0.5
79 TestFunctional/serial/MinikubeKubectlCmd 0.51
83 TestFunctional/serial/LogsCmd 168.62
84 TestFunctional/serial/LogsFileCmd 241.02
96 TestFunctional/parallel/AddonsCmd 0.76
99 TestFunctional/parallel/SSHCmd 18.91
100 TestFunctional/parallel/CpCmd 54.35
102 TestFunctional/parallel/FileSync 9.5
103 TestFunctional/parallel/CertSync 58.54
109 TestFunctional/parallel/NonActiveRuntimeDisabled 9.32
111 TestFunctional/parallel/License 3.04
118 TestFunctional/parallel/Version/short 0.24
119 TestFunctional/parallel/Version/components 7.81
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
135 TestFunctional/parallel/ImageCommands/Setup 3.78
140 TestFunctional/parallel/ImageCommands/ImageRemove 120.62
141 TestFunctional/parallel/ProfileCmd/profile_not_create 10.23
142 TestFunctional/parallel/ProfileCmd/profile_list 10.22
143 TestFunctional/parallel/ProfileCmd/profile_json_output 10.18
145 TestFunctional/parallel/UpdateContextCmd/no_changes 2.45
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.49
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.48
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 60
150 TestFunctional/delete_addon-resizer_images 0.02
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 702.07
157 TestMultiControlPlane/serial/DeployApp 13.43
159 TestMultiControlPlane/serial/AddWorkerNode 279.19
160 TestMultiControlPlane/serial/NodeLabels 0.18
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 29.31
162 TestMultiControlPlane/serial/CopyFile 647.59
166 TestImageBuild/serial/Setup 198.27
167 TestImageBuild/serial/NormalBuild 9.68
168 TestImageBuild/serial/BuildWithBuildArg 9.06
169 TestImageBuild/serial/BuildWithDockerIgnore 7.96
170 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.66
174 TestJSONOutput/start/Command 242.61
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 7.93
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 7.9
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 35.36
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 1.51
202 TestMainNoArgs 0.26
203 TestMinikubeProfile 529.22
206 TestMountStart/serial/StartWithMountFirst 156.49
207 TestMountStart/serial/VerifyMountFirst 9.68
208 TestMountStart/serial/StartWithMountSecond 156.86
209 TestMountStart/serial/VerifyMountSecond 9.58
210 TestMountStart/serial/DeleteFirst 28.18
211 TestMountStart/serial/VerifyMountPostDelete 9.56
212 TestMountStart/serial/Stop 30.86
213 TestMountStart/serial/RestartStopped 119.28
214 TestMountStart/serial/VerifyMountPostStop 9.77
217 TestMultiNode/serial/FreshStart2Nodes 424.89
218 TestMultiNode/serial/DeployApp2Nodes 9.67
220 TestMultiNode/serial/AddNode 228.4
221 TestMultiNode/serial/MultiNodeLabels 0.19
222 TestMultiNode/serial/ProfileList 9.97
223 TestMultiNode/serial/CopyFile 366.09
224 TestMultiNode/serial/StopNode 77.22
225 TestMultiNode/serial/StartAfterStop 185.44
230 TestPreload 527.38
231 TestScheduledStopWindows 330.19
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.38
x
+
TestDownloadOnly/v1.20.0/json-events (17.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-687900 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-687900 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (17.7305044s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.73s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-687900
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-687900: exit status 85 (291.2718ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-687900 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC |          |
	|         | -p download-only-687900        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:22:06
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:22:06.663756   10004 out.go:291] Setting OutFile to fd 616 ...
	I0603 12:22:06.664704   10004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:22:06.664704   10004 out.go:304] Setting ErrFile to fd 620...
	I0603 12:22:06.664748   10004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0603 12:22:06.678366   10004 root.go:314] Error reading config file at C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0603 12:22:06.691843   10004 out.go:298] Setting JSON to true
	I0603 12:22:06.694782   10004 start.go:129] hostinfo: {"hostname":"minikube3","uptime":18255,"bootTime":1717399071,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 12:22:06.695797   10004 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 12:22:06.701518   10004 out.go:97] [download-only-687900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	W0603 12:22:06.701518   10004 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0603 12:22:06.701518   10004 notify.go:220] Checking for updates...
	I0603 12:22:06.704896   10004 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:22:06.708015   10004 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 12:22:06.711228   10004 out.go:169] MINIKUBE_LOCATION=19011
	I0603 12:22:06.714811   10004 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0603 12:22:06.719804   10004 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0603 12:22:06.721420   10004 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:22:12.065669   10004 out.go:97] Using the hyperv driver based on user configuration
	I0603 12:22:12.065740   10004 start.go:297] selected driver: hyperv
	I0603 12:22:12.065740   10004 start.go:901] validating driver "hyperv" against <nil>
	I0603 12:22:12.065740   10004 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 12:22:12.118329   10004 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0603 12:22:12.119428   10004 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0603 12:22:12.119428   10004 cni.go:84] Creating CNI manager for ""
	I0603 12:22:12.119428   10004 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0603 12:22:12.119428   10004 start.go:340] cluster config:
	{Name:download-only-687900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-687900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:22:12.120981   10004 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:22:12.123923   10004 out.go:97] Downloading VM boot image ...
	I0603 12:22:12.123923   10004 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 12:22:16.411841   10004 out.go:97] Starting "download-only-687900" primary control-plane node in "download-only-687900" cluster
	I0603 12:22:16.411951   10004 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0603 12:22:16.456624   10004 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0603 12:22:16.456624   10004 cache.go:56] Caching tarball of preloaded images
	I0603 12:22:16.457558   10004 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0603 12:22:16.460706   10004 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0603 12:22:16.460706   10004 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0603 12:22:16.533188   10004 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0603 12:22:19.893047   10004 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0603 12:22:19.937919   10004 preload.go:255] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0603 12:22:20.910640   10004 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0603 12:22:20.910985   10004 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-687900\config.json ...
	I0603 12:22:20.911577   10004 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-687900\config.json: {Name:mk44fc34a3d9e44656c40b1e9407aef6e74bfce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:22:20.913006   10004 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0603 12:22:20.914556   10004 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-687900 host does not exist
	  To start a cluster, run: "minikube start -p download-only-687900"

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 12:22:24.379539    5524 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)
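
The "Last Start" log above shows the download path minikube takes on a cold cache: the boot ISO, the preload tarball, and kubectl.exe are all fetched with a ?checksum=... query, and the preload is verified after the download ("getting checksum" / "verifying checksum"). A generic sketch of that verify-after-download step, using a plain SHA-256 comparison against a published .sha256 file; this is illustrative only, not minikube's download package, which also supports md5 sums as the preload URL shows:

// Compare a downloaded file's SHA-256 digest against the hex digest stored in
// a *.sha256 sidecar file (first whitespace-separated field). File names in
// main() are placeholders.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

func verifySHA256(path, sumPath string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))

	raw, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(raw))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %s", sumPath)
	}
	want := fields[0]

	if !strings.EqualFold(got, want) {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	if err := verifySHA256("minikube.iso", "minikube.iso.sha256"); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}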

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (1.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3576196s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.36s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-687900
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-687900: (1.3444685s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.34s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/json-events (11.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-633500 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-633500 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv: (11.5319693s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (11.53s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/LogsDuration (0.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-633500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-633500: exit status 85 (446.9998ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-687900 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC |                     |
	|         | -p download-only-687900        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC | 03 Jun 24 12:22 UTC |
	| delete  | -p download-only-687900        | download-only-687900 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC | 03 Jun 24 12:22 UTC |
	| start   | -o=json --download-only        | download-only-633500 | minikube3\jenkins | v1.33.1 | 03 Jun 24 12:22 UTC |                     |
	|         | -p download-only-633500        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:22:27
	Running on machine: minikube3
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:22:27.450267    1336 out.go:291] Setting OutFile to fd 788 ...
	I0603 12:22:27.451047    1336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:22:27.451047    1336 out.go:304] Setting ErrFile to fd 792...
	I0603 12:22:27.451574    1336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:22:27.476360    1336 out.go:298] Setting JSON to true
	I0603 12:22:27.478716    1336 start.go:129] hostinfo: {"hostname":"minikube3","uptime":18275,"bootTime":1717399071,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 12:22:27.479693    1336 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 12:22:27.484644    1336 out.go:97] [download-only-633500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 12:22:27.487321    1336 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 12:22:27.485683    1336 notify.go:220] Checking for updates...
	I0603 12:22:27.493451    1336 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 12:22:27.496312    1336 out.go:169] MINIKUBE_LOCATION=19011
	I0603 12:22:27.499174    1336 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0603 12:22:27.504659    1336 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0603 12:22:27.504986    1336 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:22:32.937895    1336 out.go:97] Using the hyperv driver based on user configuration
	I0603 12:22:32.937895    1336 start.go:297] selected driver: hyperv
	I0603 12:22:32.937895    1336 start.go:901] validating driver "hyperv" against <nil>
	I0603 12:22:32.937895    1336 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 12:22:32.989072    1336 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0603 12:22:32.989796    1336 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0603 12:22:32.989796    1336 cni.go:84] Creating CNI manager for ""
	I0603 12:22:32.989796    1336 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0603 12:22:32.989796    1336 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 12:22:32.989796    1336 start.go:340] cluster config:
	{Name:download-only-633500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-633500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:22:32.990747    1336 iso.go:125] acquiring lock: {Name:mk8dfcd3d0dcd7e12c52bc190d225d6686e354f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:22:32.994013    1336 out.go:97] Starting "download-only-633500" primary control-plane node in "download-only-633500" cluster
	I0603 12:22:32.994013    1336 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 12:22:33.035820    1336 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 12:22:33.035820    1336 cache.go:56] Caching tarball of preloaded images
	I0603 12:22:33.036186    1336 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0603 12:22:33.038930    1336 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0603 12:22:33.038930    1336 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0603 12:22:33.113772    1336 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4?checksum=md5:f110de85c4cd01fa5de0726fbc529387 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0603 12:22:36.981915    1336 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0603 12:22:36.982932    1336 preload.go:255] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-633500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-633500"

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 12:22:38.903507   11328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.45s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAll (1.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3751691s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (1.38s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-633500
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-633500: (1.3402559s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.34s)

                                                
                                    
x
+
TestBinaryMirror (7.21s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-022300 --alsologtostderr --binary-mirror http://127.0.0.1:60183 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-022300 --alsologtostderr --binary-mirror http://127.0.0.1:60183 --driver=hyperv: (6.3227223s)
helpers_test.go:175: Cleaning up "binary-mirror-022300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-022300
--- PASS: TestBinaryMirror (7.21s)
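
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:60183 in this run) so the kubectl/kubelet/kubeadm downloads come from it instead of dl.k8s.io. A hedged sketch of the kind of throwaway static file server such a mirror could be; the port is taken from the test output above, while the directory layout the mirror must expose is assumed and not shown here:

// Plain static file server standing in for a local binary mirror.
// Illustrative only; the real test wires up its own temporary server.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve the current directory; the kubectl/kubelet/kubeadm binaries would
	// need to be laid out under it in the path scheme the mirror is queried with.
	log.Fatal(http.ListenAndServe("127.0.0.1:60183", http.FileServer(http.Dir("."))))
}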

                                                
                                    
x
+
TestOffline (428.69s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-528900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-528900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (6m27.2705179s)
helpers_test.go:175: Cleaning up "offline-docker-528900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-528900
E0603 15:18:37.392025   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-528900: (41.4179497s)
--- PASS: TestOffline (428.69s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.27s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-975100
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-975100: exit status 85 (271.4171ms)

                                                
                                                
-- stdout --
	* Profile "addons-975100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-975100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 12:22:51.838945    5416 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.27s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.28s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-975100
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-975100: exit status 85 (277.6066ms)

                                                
                                                
-- stdout --
	* Profile "addons-975100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-975100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 12:22:51.837904   11220 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.28s)

                                                
                                    
x
+
TestAddons/Setup (442.69s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-975100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-975100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m22.6886102s)
--- PASS: TestAddons/Setup (442.69s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (65.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-975100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-975100 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-975100 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [94ade4f2-25b4-499d-a864-9851004bb1d5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [94ade4f2-25b4-499d-a864-9851004bb1d5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0176177s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.592071s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-975100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0603 12:32:02.570936    3972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-975100 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 ip: (2.5825312s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.22.146.54
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 addons disable ingress-dns --alsologtostderr -v=1: (15.751782s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 addons disable ingress --alsologtostderr -v=1: (22.3590239s)
--- PASS: TestAddons/parallel/Ingress (65.40s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (26.67s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cgfvw" [17eb3a37-438b-4597-a59d-f7ec22dc0347] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.017185s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-975100
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-975100: (21.6523744s)
--- PASS: TestAddons/parallel/InspektorGadget (26.67s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (21.7s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.9865ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-jhc6h" [4187f9ec-f978-4643-838b-d3875d916087] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0213458s
addons_test.go:417: (dbg) Run:  kubectl --context addons-975100 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 addons disable metrics-server --alsologtostderr -v=1: (16.4295324s)
--- PASS: TestAddons/parallel/MetricsServer (21.70s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (29.64s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 7.6325ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-zvpk6" [6a235ac0-d0ab-42fb-9971-c2242086334b] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.1323625s
addons_test.go:475: (dbg) Run:  kubectl --context addons-975100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-975100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.4565298s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 addons disable helm-tiller --alsologtostderr -v=1: (16.0140388s)
--- PASS: TestAddons/parallel/HelmTiller (29.64s)

                                                
                                    
x
+
TestAddons/parallel/CSI (80.29s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 11.9912ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-975100 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-975100 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [63ada9b0-75f8-4279-8a47-732aa3a3bc4a] Pending
helpers_test.go:344: "task-pv-pod" [63ada9b0-75f8-4279-8a47-732aa3a3bc4a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [63ada9b0-75f8-4279-8a47-732aa3a3bc4a] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.0071781s
addons_test.go:586: (dbg) Run:  kubectl --context addons-975100 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-975100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-975100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-975100 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-975100 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-975100 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-975100 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5a7e0421-b12b-4dbf-a4f8-cedfbcc6e677] Pending
helpers_test.go:344: "task-pv-pod-restore" [5a7e0421-b12b-4dbf-a4f8-cedfbcc6e677] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5a7e0421-b12b-4dbf-a4f8-cedfbcc6e677] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.021115s
addons_test.go:628: (dbg) Run:  kubectl --context addons-975100 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-975100 delete pod task-pv-pod-restore: (1.1419267s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-975100 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-975100 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 addons disable csi-hostpath-driver --alsologtostderr -v=1: (22.0635225s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 addons disable volumesnapshots --alsologtostderr -v=1: (15.2014342s)
--- PASS: TestAddons/parallel/CSI (80.29s)
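Note: the long run of identical "kubectl --context addons-975100 get pvc hpvc -o jsonpath={.status.phase}" lines above is the test's wait helper polling the claim until it reports Bound. A minimal stand-alone sketch of that style of poll loop in Go (hypothetical helper, not the actual helpers_test.go code; assumes kubectl is on PATH and reuses the context/claim names from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls "kubectl get pvc" until .status.phase is Bound or the timeout elapses,
// mirroring the repeated jsonpath queries in the log above.
func waitForPVCBound(kubeContext, namespace, claim string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pvc", claim,
			"-n", namespace, "-o", "jsonpath={.status.phase}").CombinedOutput()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval; the real helper's interval may differ
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, claim, timeout)
}

func main() {
	if err := waitForPVCBound("addons-975100", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}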

                                                
                                    
TestAddons/parallel/Headlamp (36.42s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-975100 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-975100 --alsologtostderr -v=1: (16.9593249s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-dr562" [68e4c5e5-0398-45bf-a7d2-b2003100d101] Pending
helpers_test.go:344: "headlamp-68456f997b-dr562" [68e4c5e5-0398-45bf-a7d2-b2003100d101] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-dr562" [68e4c5e5-0398-45bf-a7d2-b2003100d101] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.4543049s
--- PASS: TestAddons/parallel/Headlamp (36.42s)

                                                
                                    
TestAddons/parallel/CloudSpanner (22.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-w9sb8" [86ccbf23-5d12-460e-8921-3d31a46584f1] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0129475s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-975100
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-975100: (16.4650677s)
--- PASS: TestAddons/parallel/CloudSpanner (22.50s)

                                                
                                    
TestAddons/parallel/LocalPath (53.92s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-975100 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-975100 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b90d022f-c2be-463b-aa0d-efdd95accf03] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b90d022f-c2be-463b-aa0d-efdd95accf03] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b90d022f-c2be-463b-aa0d-efdd95accf03] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0218206s
addons_test.go:992: (dbg) Run:  kubectl --context addons-975100 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 ssh "cat /opt/local-path-provisioner/pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 ssh "cat /opt/local-path-provisioner/pvc-3643ecd0-e12d-4061-aa02-dd2d9e130755_default_test-pvc/file1": (10.665694s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-975100 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-975100 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (8.018844s)
--- PASS: TestAddons/parallel/LocalPath (53.92s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (22.3s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7kz8w" [8712d628-4348-427e-9373-ce7d8f1b2e9b] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0196935s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-975100
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-975100: (16.2685055s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (22.30s)

                                                
                                    
TestAddons/parallel/Yakd (5.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-c4zgs" [320edd34-a3e5-49d9-ba43-7252508898df] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0129128s
--- PASS: TestAddons/parallel/Yakd (5.02s)

                                                
                                    
TestAddons/parallel/Volcano (77.68s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 27.7464ms
addons_test.go:897: volcano-admission stabilized in 28.9211ms
addons_test.go:889: volcano-scheduler stabilized in 28.9211ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-c4zsh" [45a2dcc4-fb20-4bec-848e-b6f67ed91076] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.0238082s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-jldgv" [c25423e0-1e9e-49c9-914b-e3c29a06d04b] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 6.0100205s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-z9tvf" [7f7cdfde-f730-464e-8c31-7e7d2fd0eeac] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.0192396s
addons_test.go:924: (dbg) Run:  kubectl --context addons-975100 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-975100 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-975100 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [9ffae459-8c00-41cf-b051-f75d0e83a731] Pending
helpers_test.go:344: "test-job-nginx-0" [9ffae459-8c00-41cf-b051-f75d0e83a731] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [9ffae459-8c00-41cf-b051-f75d0e83a731] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 34.0201067s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-975100 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-975100 addons disable volcano --alsologtostderr -v=1: (26.3687489s)
--- PASS: TestAddons/parallel/Volcano (77.68s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.35s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-975100 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-975100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.35s)

                                                
                                    
TestAddons/StoppedEnableDisable (54.54s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-975100
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-975100: (41.4948735s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-975100
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-975100: (5.2006221s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-975100
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-975100: (5.1432674s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-975100
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-975100: (2.700336s)
--- PASS: TestAddons/StoppedEnableDisable (54.54s)

                                                
                                    
TestErrorSpam/start (17.43s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 start --dry-run: (5.8070516s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 start --dry-run: (5.8320516s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 start --dry-run: (5.7888355s)
--- PASS: TestErrorSpam/start (17.43s)

                                                
                                    
TestErrorSpam/status (37.05s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 status: (12.6917471s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 status: (12.15789s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 status: (12.1934097s)
--- PASS: TestErrorSpam/status (37.05s)

                                                
                                    
TestErrorSpam/pause (23.06s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 pause: (7.9538892s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 pause: (7.6370935s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 pause: (7.4691639s)
--- PASS: TestErrorSpam/pause (23.06s)

                                                
                                    
TestErrorSpam/unpause (23.34s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 unpause: (7.7972619s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 unpause: (7.9001581s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 unpause: (7.6359274s)
--- PASS: TestErrorSpam/unpause (23.34s)

                                                
                                    
TestErrorSpam/stop (57s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 stop
E0603 12:40:14.713662   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 12:40:42.546006   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 stop: (34.9580635s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 stop: (11.22308s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-397300 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-397300 stop: (10.8162732s)
--- PASS: TestErrorSpam/stop (57.00s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\10544\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (208.08s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-808300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-808300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m28.0698958s)
--- PASS: TestFunctional/serial/StartWithProxy (208.08s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (128.75s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-808300 --alsologtostderr -v=8
E0603 12:45:14.727494   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-808300 --alsologtostderr -v=8: (2m8.7468299s)
functional_test.go:659: soft start took 2m8.7482422s for "functional-808300" cluster.
--- PASS: TestFunctional/serial/SoftStart (128.75s)

                                                
                                    
TestFunctional/serial/KubeContext (0.13s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.21s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-808300 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (26.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cache add registry.k8s.io/pause:3.1: (9.0538908s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cache add registry.k8s.io/pause:3.3: (8.7468017s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cache add registry.k8s.io/pause:latest: (8.6235344s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (26.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (11.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-808300 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3928456186\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-808300 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3928456186\001: (2.2535984s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cache add minikube-local-cache-test:functional-808300
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cache add minikube-local-cache-test:functional-808300: (8.3300768s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cache delete minikube-local-cache-test:functional-808300
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-808300
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (11.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh sudo crictl images: (9.4647967s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.47s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (36.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.3632724s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.3907326s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 12:47:58.166923   12184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cache reload: (8.2587269s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.50232s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (36.52s)
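Note: the cache_reload steps above remove registry.k8s.io/pause:latest inside the node, confirm that "crictl inspecti" now fails, run "cache reload", and confirm the image is back. A rough sketch of driving that same sequence from Go with os/exec (command strings copied from the log; illustrative only, not the actual functional_test.go code):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and reports whether it exited successfully, echoing its combined output.
func run(name string, args ...string) bool {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", name, args, out)
	return err == nil
}

func main() {
	minikube := "out/minikube-windows-amd64.exe"
	profile := "functional-808300"
	// Remove the cached image inside the node, then expect inspecti to fail.
	run(minikube, "-p", profile, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	if run(minikube, "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") {
		fmt.Println("unexpected: image still present after rmi")
	}
	// Reload the cache and expect the image to be present again.
	run(minikube, "-p", profile, "cache", "reload")
	if !run(minikube, "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") {
		fmt.Println("unexpected: image missing after cache reload")
	}
}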

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.50s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 kubectl -- --context functional-808300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.51s)

                                                
                                    
TestFunctional/serial/LogsCmd (168.62s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs: (2m48.6223863s)
--- PASS: TestFunctional/serial/LogsCmd (168.62s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (241.02s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialLogsFileCmd303037674\001\logs.txt
E0603 13:00:14.722918   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialLogsFileCmd303037674\001\logs.txt: (4m1.0154287s)
--- PASS: TestFunctional/serial/LogsFileCmd (241.02s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.76s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.76s)

                                                
                                    
TestFunctional/parallel/SSHCmd (18.91s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "echo hello": (9.655284s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "cat /etc/hostname": (9.2548439s)
--- PASS: TestFunctional/parallel/SSHCmd (18.91s)

                                                
                                    
TestFunctional/parallel/CpCmd (54.35s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cp testdata\cp-test.txt /home/docker/cp-test.txt: (7.8034248s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh -n functional-808300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh -n functional-808300 "sudo cat /home/docker/cp-test.txt": (9.9969638s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cp functional-808300:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd2662913280\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cp functional-808300:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd2662913280\001\cp-test.txt: (9.8183708s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh -n functional-808300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh -n functional-808300 "sudo cat /home/docker/cp-test.txt": (9.8142316s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (7.261235s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh -n functional-808300 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh -n functional-808300 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.6452195s)
--- PASS: TestFunctional/parallel/CpCmd (54.35s)

                                                
                                    
TestFunctional/parallel/FileSync (9.5s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10544/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/test/nested/copy/10544/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/test/nested/copy/10544/hosts": (9.4973574s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (9.50s)

                                                
                                    
TestFunctional/parallel/CertSync (58.54s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10544.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/10544.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/10544.pem": (9.5589588s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10544.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /usr/share/ca-certificates/10544.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /usr/share/ca-certificates/10544.pem": (9.7033489s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/51391683.0": (9.7326229s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/105442.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/105442.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/105442.pem": (9.7559763s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/105442.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /usr/share/ca-certificates/105442.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /usr/share/ca-certificates/105442.pem": (9.9681267s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.8148741s)
--- PASS: TestFunctional/parallel/CertSync (58.54s)
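Note: CertSync verifies that the host's test certificates were synced into the VM by cat-ing each expected path over ssh. A compact sketch of that kind of check loop (hypothetical helper; the paths are the ones exercised in the log above, and the /105442.pem and hashed-link variants are checked the same way):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	minikube := "out/minikube-windows-amd64.exe"
	paths := []string{
		"/etc/ssl/certs/10544.pem",
		"/usr/share/ca-certificates/10544.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		// Each file should exist and be readable inside the functional-808300 VM.
		out, err := exec.Command(minikube, "-p", "functional-808300", "ssh", "sudo cat "+p).CombinedOutput()
		if err != nil {
			fmt.Printf("missing or unreadable %s: %v\n%s\n", p, err, out)
		}
	}
}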

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (9.32s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo systemctl is-active crio": exit status 1 (9.320242s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:04:53.719527    3496 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (9.32s)
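Note: NonActiveRuntimeDisabled expects the non-selected runtime (crio here, with Docker active) to be stopped: "systemctl is-active crio" prints "inactive" and exits non-zero (status 3 above), which the test treats as success. A small hypothetical sketch of that check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	minikube := "out/minikube-windows-amd64.exe"
	out, err := exec.Command(minikube, "-p", "functional-808300", "ssh", "sudo systemctl is-active crio").CombinedOutput()
	// A non-zero exit plus "inactive" on stdout is the expected result while Docker is the active runtime.
	if err != nil && strings.Contains(string(out), "inactive") {
		fmt.Println("crio is disabled, as expected")
	} else {
		fmt.Println("unexpected: crio appears to be active")
	}
}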

                                                
                                    
TestFunctional/parallel/License (3.04s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.0330614s)
--- PASS: TestFunctional/parallel/License (3.04s)

                                                
                                    
TestFunctional/parallel/Version/short (0.24s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 version --short
--- PASS: TestFunctional/parallel/Version/short (0.24s)

                                                
                                    
TestFunctional/parallel/Version/components (7.81s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 version -o=json --components: (7.8039778s)
--- PASS: TestFunctional/parallel/Version/components (7.81s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9600: OpenProcess: The parameter is incorrect.
helpers_test.go:502: unable to terminate pid 9724: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (3.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.5498352s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-808300
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (120.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image rm gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image rm gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr: (1m0.3212555s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls: (1m0.2952067s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (120.62s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (10.23s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.8004931s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (10.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (10.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (9.984003s)
functional_test.go:1311: Took "9.9850427s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "235.0574ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (10.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (10.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (9.9396789s)
functional_test.go:1362: Took "9.9398324s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "236.9187ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (10.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 update-context --alsologtostderr -v=2: (2.4509342s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.45s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 update-context --alsologtostderr -v=2: (2.4836246s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.49s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 update-context --alsologtostderr -v=2: (2.4768097s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (60s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-808300
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image save --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image save --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr: (59.6089841s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-808300
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (60.00s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-808300
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-808300: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "gcr.io/google-containers/addon-resizer:functional-808300" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-808300": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.02s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-808300
functional_test.go:197: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-808300: context deadline exceeded (0s)
functional_test.go:199: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-808300": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-808300
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-808300: context deadline exceeded (0s)
functional_test.go:207: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-808300": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (702.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-149700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0603 13:23:37.326922   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:23:37.342739   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:23:37.359060   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:23:37.390785   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:23:37.438024   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:23:37.532666   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:23:37.707194   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:23:38.040433   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:23:38.692198   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:23:39.986413   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:23:42.554058   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:23:47.682694   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:23:57.929432   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:24:18.417466   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:24:57.946683   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 13:24:59.386394   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:25:14.737634   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 13:26:21.321102   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:28:37.325402   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:29:05.174911   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:30:14.736891   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 13:33:37.330608   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-149700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m4.9585554s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 status -v=7 --alsologtostderr: (37.1108601s)
--- PASS: TestMultiControlPlane/serial/StartCluster (702.07s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (13.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-149700 -- rollout status deployment/busybox: (4.0478838s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-4hfj7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-4hfj7 -- nslookup kubernetes.io: (1.9627458s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-fkkts -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-vzbnc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-vzbnc -- nslookup kubernetes.io: (1.8064847s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-4hfj7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-fkkts -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-vzbnc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-4hfj7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-fkkts -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-149700 -- exec busybox-fc5497c4f-vzbnc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (13.43s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (279.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-149700 -v=7 --alsologtostderr
E0603 13:38:37.329674   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-149700 -v=7 --alsologtostderr: (3m49.7596036s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 status -v=7 --alsologtostderr
E0603 13:40:00.556638   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:40:14.753347   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 status -v=7 --alsologtostderr: (49.4273877s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (279.19s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-149700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.18s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (29.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (29.305751s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (29.31s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (647.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 status --output json -v=7 --alsologtostderr
E0603 13:41:37.960892   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 status --output json -v=7 --alsologtostderr: (49.4920449s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp testdata\cp-test.txt ha-149700:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp testdata\cp-test.txt ha-149700:/home/docker/cp-test.txt: (9.8854761s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test.txt": (9.884487s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4159683526\001\cp-test_ha-149700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4159683526\001\cp-test_ha-149700.txt: (9.91926s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test.txt": (9.7946694s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700:/home/docker/cp-test.txt ha-149700-m02:/home/docker/cp-test_ha-149700_ha-149700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700:/home/docker/cp-test.txt ha-149700-m02:/home/docker/cp-test_ha-149700_ha-149700-m02.txt: (16.8850381s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test.txt": (9.9136942s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test_ha-149700_ha-149700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test_ha-149700_ha-149700-m02.txt": (9.8924616s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700:/home/docker/cp-test.txt ha-149700-m03:/home/docker/cp-test_ha-149700_ha-149700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700:/home/docker/cp-test.txt ha-149700-m03:/home/docker/cp-test_ha-149700_ha-149700-m03.txt: (16.9372107s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test.txt"
E0603 13:43:37.333679   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test.txt": (9.7508492s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test_ha-149700_ha-149700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test_ha-149700_ha-149700-m03.txt": (9.7415499s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700:/home/docker/cp-test.txt ha-149700-m04:/home/docker/cp-test_ha-149700_ha-149700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700:/home/docker/cp-test.txt ha-149700-m04:/home/docker/cp-test_ha-149700_ha-149700-m04.txt: (16.8788069s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test.txt": (9.7827737s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test_ha-149700_ha-149700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test_ha-149700_ha-149700-m04.txt": (9.7350842s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp testdata\cp-test.txt ha-149700-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp testdata\cp-test.txt ha-149700-m02:/home/docker/cp-test.txt: (9.8416386s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test.txt": (9.8946028s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4159683526\001\cp-test_ha-149700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4159683526\001\cp-test_ha-149700-m02.txt: (9.8457561s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test.txt": (9.7686621s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m02:/home/docker/cp-test.txt ha-149700:/home/docker/cp-test_ha-149700-m02_ha-149700.txt
E0603 13:45:14.752286   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m02:/home/docker/cp-test.txt ha-149700:/home/docker/cp-test_ha-149700-m02_ha-149700.txt: (17.2970962s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test.txt": (9.8331097s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test_ha-149700-m02_ha-149700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test_ha-149700-m02_ha-149700.txt": (9.8143212s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m02:/home/docker/cp-test.txt ha-149700-m03:/home/docker/cp-test_ha-149700-m02_ha-149700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m02:/home/docker/cp-test.txt ha-149700-m03:/home/docker/cp-test_ha-149700-m02_ha-149700-m03.txt: (17.0478323s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test.txt": (9.6510747s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test_ha-149700-m02_ha-149700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test_ha-149700-m02_ha-149700-m03.txt": (9.8615678s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m02:/home/docker/cp-test.txt ha-149700-m04:/home/docker/cp-test_ha-149700-m02_ha-149700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m02:/home/docker/cp-test.txt ha-149700-m04:/home/docker/cp-test_ha-149700-m02_ha-149700-m04.txt: (17.282875s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test.txt": (9.7548077s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test_ha-149700-m02_ha-149700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test_ha-149700-m02_ha-149700-m04.txt": (9.7129082s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp testdata\cp-test.txt ha-149700-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp testdata\cp-test.txt ha-149700-m03:/home/docker/cp-test.txt: (9.770639s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test.txt": (9.7163702s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4159683526\001\cp-test_ha-149700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4159683526\001\cp-test_ha-149700-m03.txt: (9.9824376s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test.txt": (9.7185018s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m03:/home/docker/cp-test.txt ha-149700:/home/docker/cp-test_ha-149700-m03_ha-149700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m03:/home/docker/cp-test.txt ha-149700:/home/docker/cp-test_ha-149700-m03_ha-149700.txt: (17.1375396s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test.txt": (9.8519616s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test_ha-149700-m03_ha-149700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test_ha-149700-m03_ha-149700.txt": (9.9332473s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m03:/home/docker/cp-test.txt ha-149700-m02:/home/docker/cp-test_ha-149700-m03_ha-149700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m03:/home/docker/cp-test.txt ha-149700-m02:/home/docker/cp-test_ha-149700-m03_ha-149700-m02.txt: (17.2814967s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test.txt"
E0603 13:48:37.334335   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test.txt": (9.9536891s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test_ha-149700-m03_ha-149700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test_ha-149700-m03_ha-149700-m02.txt": (9.6975141s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m03:/home/docker/cp-test.txt ha-149700-m04:/home/docker/cp-test_ha-149700-m03_ha-149700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m03:/home/docker/cp-test.txt ha-149700-m04:/home/docker/cp-test_ha-149700-m03_ha-149700-m04.txt: (16.9774153s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test.txt": (9.8033365s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test_ha-149700-m03_ha-149700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test_ha-149700-m03_ha-149700-m04.txt": (9.7944179s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp testdata\cp-test.txt ha-149700-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp testdata\cp-test.txt ha-149700-m04:/home/docker/cp-test.txt: (9.8073426s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test.txt": (9.6908646s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4159683526\001\cp-test_ha-149700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4159683526\001\cp-test_ha-149700-m04.txt: (9.8361757s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test.txt": (9.8523831s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m04:/home/docker/cp-test.txt ha-149700:/home/docker/cp-test_ha-149700-m04_ha-149700.txt
E0603 13:50:14.747997   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m04:/home/docker/cp-test.txt ha-149700:/home/docker/cp-test_ha-149700-m04_ha-149700.txt: (17.241394s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test.txt": (9.9093229s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test_ha-149700-m04_ha-149700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700 "sudo cat /home/docker/cp-test_ha-149700-m04_ha-149700.txt": (9.9702575s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m04:/home/docker/cp-test.txt ha-149700-m02:/home/docker/cp-test_ha-149700-m04_ha-149700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m04:/home/docker/cp-test.txt ha-149700-m02:/home/docker/cp-test_ha-149700-m04_ha-149700-m02.txt: (17.266585s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test.txt": (9.7114645s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test_ha-149700-m04_ha-149700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m02 "sudo cat /home/docker/cp-test_ha-149700-m04_ha-149700-m02.txt": (9.7872932s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m04:/home/docker/cp-test.txt ha-149700-m03:/home/docker/cp-test_ha-149700-m04_ha-149700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 cp ha-149700-m04:/home/docker/cp-test.txt ha-149700-m03:/home/docker/cp-test_ha-149700-m04_ha-149700-m03.txt: (17.1699182s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m04 "sudo cat /home/docker/cp-test.txt": (9.7435884s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test_ha-149700-m04_ha-149700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-149700 ssh -n ha-149700-m03 "sudo cat /home/docker/cp-test_ha-149700-m04_ha-149700-m03.txt": (9.8556778s)
--- PASS: TestMultiControlPlane/serial/CopyFile (647.59s)

                                                
                                    
TestImageBuild/serial/Setup (198.27s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-921000 --driver=hyperv
E0603 13:56:40.568754   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 13:58:17.974526   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 13:58:37.342413   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-921000 --driver=hyperv: (3m18.268403s)
--- PASS: TestImageBuild/serial/Setup (198.27s)

                                                
                                    
TestImageBuild/serial/NormalBuild (9.68s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-921000
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-921000: (9.6750374s)
--- PASS: TestImageBuild/serial/NormalBuild (9.68s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (9.06s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-921000
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-921000: (9.0583693s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.06s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (7.96s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-921000
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-921000: (7.9573733s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.96s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.66s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-921000
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-921000: (7.6618948s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.66s)

                                                
                                    
TestJSONOutput/start/Command (242.61s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-985900 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0603 14:03:37.351482   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-985900 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (4m2.6064846s)
--- PASS: TestJSONOutput/start/Command (242.61s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (7.93s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-985900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-985900 --output=json --user=testUser: (7.9266935s)
--- PASS: TestJSONOutput/pause/Command (7.93s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (7.9s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-985900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-985900 --output=json --user=testUser: (7.9025574s)
--- PASS: TestJSONOutput/unpause/Command (7.90s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (35.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-985900 --output=json --user=testUser
E0603 14:05:14.756218   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-985900 --output=json --user=testUser: (35.3561022s)
--- PASS: TestJSONOutput/stop/Command (35.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.51s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-110700 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-110700 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (271.0564ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fab36274-89d0-4bbb-8de0-65b70b450fb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-110700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6285a08a-f505-4fe9-869d-a980cc2b4148","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube3\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"0eb965ee-979c-4827-a194-fb72ed1f1780","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"58d4feba-c337-4766-a5db-29c210878a18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"2ea3b327-9805-4cc1-87e1-a6aa4757fb42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19011"}}
	{"specversion":"1.0","id":"69856704-a46b-4719-be71-7061fffd70da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"538fe62d-e58f-4015-9dfb-ae033d10e2cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 14:05:56.385809    9212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-110700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-110700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-110700: (1.2386156s)
--- PASS: TestErrorJSONOutput (1.51s)
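
Note: the -- stdout -- block above shows the CloudEvents-style JSON lines that minikube prints when started with --output=json. As a minimal sketch (not part of the test suite), the following Go program decodes such lines; the struct fields mirror the keys visible in the output above, while the program itself and its invocation are hypothetical.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// minikubeEvent mirrors the keys seen in the JSON lines above
	// (specversion, id, source, type, datacontenttype, data).
	type minikubeEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// Pipe minikube output in, e.g.:
		//   minikube start -p demo --output=json | parse-events
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip non-JSON lines, such as the stderr warning above
			}
			// Error events (type io.k8s.sigs.minikube.error) also carry an
			// "exitcode" key, as in the DRV_UNSUPPORTED_OS event shown above.
			fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
		}
	}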

                                                
                                    
TestMainNoArgs (0.26s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.26s)

                                                
                                    
TestMinikubeProfile (529.22s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-911600 --driver=hyperv
E0603 14:08:37.357178   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-911600 --driver=hyperv: (3m17.2214024s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-911600 --driver=hyperv
E0603 14:10:14.761921   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-911600 --driver=hyperv: (3m19.6909251s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-911600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.6305002s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-911600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.2956927s)
helpers_test.go:175: Cleaning up "second-911600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-911600
E0603 14:13:20.587009   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 14:13:37.349882   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-911600: (45.9173449s)
helpers_test.go:175: Cleaning up "first-911600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-911600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-911600: (46.5733909s)
--- PASS: TestMinikubeProfile (529.22s)
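The profile commands exercised above can also be scripted: profile list -o json prints a single JSON document describing all profiles. A minimal PowerShell sketch, assuming the output keeps its usual top-level valid/invalid arrays with a Name field per profile (the field names are an assumption; they are not shown in this report):

	# list the names of valid profiles (assumed schema)
	out/minikube-windows-amd64.exe profile list -o json |
	  ConvertFrom-Json |
	  ForEach-Object { $_.valid | ForEach-Object { $_.Name } }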

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (156.49s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-773400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0603 14:14:57.996509   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 14:15:14.769907   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-773400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m35.4831407s)
--- PASS: TestMountStart/serial/StartWithMountFirst (156.49s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (9.68s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-773400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-773400 ssh -- ls /minikube-host: (9.676247s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.68s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (156.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-773400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0603 14:18:37.356072   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-773400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m35.8570486s)
--- PASS: TestMountStart/serial/StartWithMountSecond (156.86s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (9.58s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-773400 ssh -- ls /minikube-host
E0603 14:20:14.761408   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-773400 ssh -- ls /minikube-host: (9.5795652s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.58s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (28.18s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-773400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-773400 --alsologtostderr -v=5: (28.1839819s)
--- PASS: TestMountStart/serial/DeleteFirst (28.18s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (9.56s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-773400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-773400 ssh -- ls /minikube-host: (9.5595121s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.56s)

                                                
                                    
x
+
TestMountStart/serial/Stop (30.86s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-773400
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-773400: (30.8604687s)
--- PASS: TestMountStart/serial/Stop (30.86s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (119.28s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-773400
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-773400: (1m58.2773011s)
--- PASS: TestMountStart/serial/RestartStopped (119.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (9.77s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-773400 ssh -- ls /minikube-host
E0603 14:23:37.358998   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-773400 ssh -- ls /minikube-host: (9.7677804s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.77s)
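Beyond ls /minikube-host, the mount parameters passed to start above (--mount-msize, --mount-port, --mount-uid/--mount-gid) can be spot-checked from inside the guest. A minimal sketch, not part of the test itself:

	# show the 9p mount entry and its options for the host share
	out/minikube-windows-amd64.exe -p mount-start-2-773400 ssh -- "mount | grep /minikube-host"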

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (424.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-720500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0603 14:25:14.771192   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
E0603 14:28:37.352853   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 14:30:00.600917   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 14:30:14.768031   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-720500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m40.4946397s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 status --alsologtostderr: (24.3969492s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (424.89s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (9.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- rollout status deployment/busybox: (3.5722601s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-mjhcf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-mjhcf -- nslookup kubernetes.io: (2.1210621s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-n2t5d -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-mjhcf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-n2t5d -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-mjhcf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-720500 -- exec busybox-fc5497c4f-n2t5d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.67s)
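The same DNS checks can be repeated by hand; pod names such as busybox-fc5497c4f-mjhcf are generated and differ between runs, so exec'ing through the deployment is the stable form. A minimal sketch:

	# see which node each replica landed on, then resolve an in-cluster name from one of them
	kubectl --context multinode-720500 get pods -o wide
	kubectl --context multinode-720500 exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local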

                                                
                                    
x
+
TestMultiNode/serial/AddNode (228.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-720500 -v 3 --alsologtostderr
E0603 14:33:37.357049   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 14:35:14.773693   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-720500 -v 3 --alsologtostderr: (3m12.3104989s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 status --alsologtostderr: (36.0910946s)
--- PASS: TestMultiNode/serial/AddNode (228.40s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-720500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (9.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.969856s)
--- PASS: TestMultiNode/serial/ProfileList (9.97s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (366.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 status --output json --alsologtostderr: (36.239519s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 cp testdata\cp-test.txt multinode-720500:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 cp testdata\cp-test.txt multinode-720500:/home/docker/cp-test.txt: (9.7235415s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500 "sudo cat /home/docker/cp-test.txt": (9.6002294s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3456099304\001\cp-test_multinode-720500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3456099304\001\cp-test_multinode-720500.txt: (9.5361296s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500 "sudo cat /home/docker/cp-test.txt": (9.5521314s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500:/home/docker/cp-test.txt multinode-720500-m02:/home/docker/cp-test_multinode-720500_multinode-720500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500:/home/docker/cp-test.txt multinode-720500-m02:/home/docker/cp-test_multinode-720500_multinode-720500-m02.txt: (16.4693287s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500 "sudo cat /home/docker/cp-test.txt": (9.5356889s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m02 "sudo cat /home/docker/cp-test_multinode-720500_multinode-720500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m02 "sudo cat /home/docker/cp-test_multinode-720500_multinode-720500-m02.txt": (9.6465525s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500:/home/docker/cp-test.txt multinode-720500-m03:/home/docker/cp-test_multinode-720500_multinode-720500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500:/home/docker/cp-test.txt multinode-720500-m03:/home/docker/cp-test_multinode-720500_multinode-720500-m03.txt: (16.8038702s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500 "sudo cat /home/docker/cp-test.txt"
E0603 14:38:37.362172   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500 "sudo cat /home/docker/cp-test.txt": (9.5757533s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m03 "sudo cat /home/docker/cp-test_multinode-720500_multinode-720500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m03 "sudo cat /home/docker/cp-test_multinode-720500_multinode-720500-m03.txt": (9.5288953s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 cp testdata\cp-test.txt multinode-720500-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 cp testdata\cp-test.txt multinode-720500-m02:/home/docker/cp-test.txt: (9.4646483s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m02 "sudo cat /home/docker/cp-test.txt": (9.5578203s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3456099304\001\cp-test_multinode-720500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3456099304\001\cp-test_multinode-720500-m02.txt: (9.5219288s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m02 "sudo cat /home/docker/cp-test.txt": (9.5687006s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m02:/home/docker/cp-test.txt multinode-720500:/home/docker/cp-test_multinode-720500-m02_multinode-720500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m02:/home/docker/cp-test.txt multinode-720500:/home/docker/cp-test_multinode-720500-m02_multinode-720500.txt: (16.6261865s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m02 "sudo cat /home/docker/cp-test.txt": (9.4910204s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500 "sudo cat /home/docker/cp-test_multinode-720500-m02_multinode-720500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500 "sudo cat /home/docker/cp-test_multinode-720500-m02_multinode-720500.txt": (9.4974607s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m02:/home/docker/cp-test.txt multinode-720500-m03:/home/docker/cp-test_multinode-720500-m02_multinode-720500-m03.txt
E0603 14:40:14.777062   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m02:/home/docker/cp-test.txt multinode-720500-m03:/home/docker/cp-test_multinode-720500-m02_multinode-720500-m03.txt: (16.4909301s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m02 "sudo cat /home/docker/cp-test.txt": (9.5431946s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m03 "sudo cat /home/docker/cp-test_multinode-720500-m02_multinode-720500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m03 "sudo cat /home/docker/cp-test_multinode-720500-m02_multinode-720500-m03.txt": (9.5768883s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 cp testdata\cp-test.txt multinode-720500-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 cp testdata\cp-test.txt multinode-720500-m03:/home/docker/cp-test.txt: (9.5754746s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m03 "sudo cat /home/docker/cp-test.txt": (9.4876342s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3456099304\001\cp-test_multinode-720500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3456099304\001\cp-test_multinode-720500-m03.txt: (9.6086806s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m03 "sudo cat /home/docker/cp-test.txt": (9.513866s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m03:/home/docker/cp-test.txt multinode-720500:/home/docker/cp-test_multinode-720500-m03_multinode-720500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m03:/home/docker/cp-test.txt multinode-720500:/home/docker/cp-test_multinode-720500-m03_multinode-720500.txt: (16.6102583s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m03 "sudo cat /home/docker/cp-test.txt": (9.5012739s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500 "sudo cat /home/docker/cp-test_multinode-720500-m03_multinode-720500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500 "sudo cat /home/docker/cp-test_multinode-720500-m03_multinode-720500.txt": (9.6485978s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m03:/home/docker/cp-test.txt multinode-720500-m02:/home/docker/cp-test_multinode-720500-m03_multinode-720500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m03:/home/docker/cp-test.txt multinode-720500-m02:/home/docker/cp-test_multinode-720500-m03_multinode-720500-m02.txt: (17.0466773s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m03 "sudo cat /home/docker/cp-test.txt": (9.8050211s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m02 "sudo cat /home/docker/cp-test_multinode-720500-m03_multinode-720500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 ssh -n multinode-720500-m02 "sudo cat /home/docker/cp-test_multinode-720500-m03_multinode-720500-m02.txt": (9.7172967s)
--- PASS: TestMultiNode/serial/CopyFile (366.09s)
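As the transcript above shows, minikube cp addresses files as plain host paths, <profile>:<path> for the primary node, and <profile>-m0N:<path> for secondary nodes, and works host-to-node, node-to-host, and node-to-node. A condensed recap of the pattern (the destination file name on the host is hypothetical):

	# host -> secondary node, then the same file back to the host
	out/minikube-windows-amd64.exe -p multinode-720500 cp testdata\cp-test.txt multinode-720500-m02:/home/docker/cp-test.txt
	out/minikube-windows-amd64.exe -p multinode-720500 cp multinode-720500-m02:/home/docker/cp-test.txt .\cp-test-from-m02.txt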

                                                
                                    
x
+
TestMultiNode/serial/StopNode (77.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 node stop m03: (24.6914996s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-720500 status: exit status 7 (26.2239186s)

                                                
                                                
-- stdout --
	multinode-720500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-720500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-720500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 14:42:52.490055    6904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 status --alsologtostderr
E0603 14:43:37.367821   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-720500 status --alsologtostderr: exit status 7 (26.3068014s)

                                                
                                                
-- stdout --
	multinode-720500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-720500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-720500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 14:43:18.707603    2632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0603 14:43:18.787412    2632 out.go:291] Setting OutFile to fd 1248 ...
	I0603 14:43:18.787771    2632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:43:18.787771    2632 out.go:304] Setting ErrFile to fd 1268...
	I0603 14:43:18.787771    2632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:43:18.800542    2632 out.go:298] Setting JSON to false
	I0603 14:43:18.800542    2632 mustload.go:65] Loading cluster: multinode-720500
	I0603 14:43:18.800542    2632 notify.go:220] Checking for updates...
	I0603 14:43:18.801787    2632 config.go:182] Loaded profile config "multinode-720500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 14:43:18.801787    2632 status.go:255] checking status of multinode-720500 ...
	I0603 14:43:18.802083    2632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:43:20.999472    2632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:43:20.999650    2632 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:43:20.999650    2632 status.go:330] multinode-720500 host status = "Running" (err=<nil>)
	I0603 14:43:20.999776    2632 host.go:66] Checking if "multinode-720500" exists ...
	I0603 14:43:21.000678    2632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:43:23.169258    2632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:43:23.169258    2632 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:43:23.169976    2632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:43:25.755215    2632 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:43:25.755215    2632 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:43:25.755215    2632 host.go:66] Checking if "multinode-720500" exists ...
	I0603 14:43:25.767283    2632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 14:43:25.767283    2632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500 ).state
	I0603 14:43:27.916466    2632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:43:27.916466    2632 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:43:27.917113    2632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500 ).networkadapters[0]).ipaddresses[0]
	I0603 14:43:30.526828    2632 main.go:141] libmachine: [stdout =====>] : 172.22.150.195
	
	I0603 14:43:30.527782    2632 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:43:30.528064    2632 sshutil.go:53] new ssh client: &{IP:172.22.150.195 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500\id_rsa Username:docker}
	I0603 14:43:30.634795    2632 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8674734s)
	I0603 14:43:30.648908    2632 ssh_runner.go:195] Run: systemctl --version
	I0603 14:43:30.670185    2632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 14:43:30.695885    2632 kubeconfig.go:125] found "multinode-720500" server: "https://172.22.150.195:8443"
	I0603 14:43:30.695885    2632 api_server.go:166] Checking apiserver status ...
	I0603 14:43:30.706886    2632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 14:43:30.746651    2632 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2048/cgroup
	W0603 14:43:30.766933    2632 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2048/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 14:43:30.779047    2632 ssh_runner.go:195] Run: ls
	I0603 14:43:30.786960    2632 api_server.go:253] Checking apiserver healthz at https://172.22.150.195:8443/healthz ...
	I0603 14:43:30.793957    2632 api_server.go:279] https://172.22.150.195:8443/healthz returned 200:
	ok
	I0603 14:43:30.793957    2632 status.go:422] multinode-720500 apiserver status = Running (err=<nil>)
	I0603 14:43:30.793957    2632 status.go:257] multinode-720500 status: &{Name:multinode-720500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 14:43:30.793957    2632 status.go:255] checking status of multinode-720500-m02 ...
	I0603 14:43:30.794956    2632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:43:32.963128    2632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:43:32.963128    2632 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:43:32.963128    2632 status.go:330] multinode-720500-m02 host status = "Running" (err=<nil>)
	I0603 14:43:32.963128    2632 host.go:66] Checking if "multinode-720500-m02" exists ...
	I0603 14:43:32.963128    2632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:43:35.142743    2632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:43:35.143115    2632 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:43:35.143115    2632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:43:37.754271    2632 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:43:37.754638    2632 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:43:37.754638    2632 host.go:66] Checking if "multinode-720500-m02" exists ...
	I0603 14:43:37.767290    2632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 14:43:37.767290    2632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m02 ).state
	I0603 14:43:39.948834    2632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0603 14:43:39.949282    2632 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:43:39.949282    2632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-720500-m02 ).networkadapters[0]).ipaddresses[0]
	I0603 14:43:42.568648    2632 main.go:141] libmachine: [stdout =====>] : 172.22.146.196
	
	I0603 14:43:42.568648    2632 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:43:42.569647    2632 sshutil.go:53] new ssh client: &{IP:172.22.146.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-720500-m02\id_rsa Username:docker}
	I0603 14:43:42.671171    2632 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.903745s)
	I0603 14:43:42.684394    2632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 14:43:42.717544    2632 status.go:257] multinode-720500-m02 status: &{Name:multinode-720500-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0603 14:43:42.717544    2632 status.go:255] checking status of multinode-720500-m03 ...
	I0603 14:43:42.718227    2632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-720500-m03 ).state
	I0603 14:43:44.886274    2632 main.go:141] libmachine: [stdout =====>] : Off
	
	I0603 14:43:44.886274    2632 main.go:141] libmachine: [stderr =====>] : 
	I0603 14:43:44.886274    2632 status.go:330] multinode-720500-m03 host status = "Stopped" (err=<nil>)
	I0603 14:43:44.886274    2632 status.go:343] host is not running, skipping remaining checks
	I0603 14:43:44.886274    2632 status.go:257] multinode-720500-m03 status: &{Name:multinode-720500-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (77.22s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (185.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 node start m03 -v=7 --alsologtostderr
E0603 14:45:14.773307   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 node start m03 -v=7 --alsologtostderr: (2m29.2248771s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-720500 status -v=7 --alsologtostderr
E0603 14:46:40.610881   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-720500 status -v=7 --alsologtostderr: (36.0295923s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (185.44s)

                                                
                                    
x
+
TestPreload (527.38s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-515900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0603 14:58:37.375121   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 15:00:14.783615   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-515900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m30.5845882s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-515900 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-515900 image pull gcr.io/k8s-minikube/busybox: (8.5522676s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-515900
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-515900: (40.2306169s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-515900 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0603 15:03:20.630516   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 15:03:37.376399   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0603 15:04:58.045747   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-515900 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m38.4827638s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-515900 image list
E0603 15:05:14.788692   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-515900 image list: (7.2461898s)
helpers_test.go:175: Cleaning up "test-preload-515900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-515900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-515900: (42.2842972s)
--- PASS: TestPreload (527.38s)

                                                
                                    
x
+
TestScheduledStopWindows (330.19s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-221500 --memory=2048 --driver=hyperv
E0603 15:08:37.383605   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-221500 --memory=2048 --driver=hyperv: (3m16.9066314s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-221500 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-221500 --schedule 5m: (10.9012501s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-221500 -n scheduled-stop-221500
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-221500 -n scheduled-stop-221500: exit status 1 (10.0242507s)

                                                
                                                
** stderr ** 
	W0603 15:09:28.981948    6812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-221500 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-221500 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.5816399s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-221500 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-221500 --schedule 5s: (10.7063355s)
E0603 15:10:14.795630   10544 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-975100\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-221500
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-221500: exit status 7 (2.4161405s)

                                                
                                                
-- stdout --
	scheduled-stop-221500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 15:10:59.304650   13732 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-221500 -n scheduled-stop-221500
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-221500 -n scheduled-stop-221500: exit status 7 (2.394706s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 15:11:01.724920   11320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-221500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-221500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-221500: (27.2424096s)
--- PASS: TestScheduledStopWindows (330.19s)
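The scheduled-stop flow above reduces to scheduling a stop, polling status for the remaining time, and optionally cancelling it. A minimal sketch with a hypothetical profile name; --cancel-scheduled is the usual way to clear a pending stop, though it is not exercised in this report:

	# schedule a stop in 5 minutes, check the countdown, then cancel it (hypothetical profile "demo")
	out/minikube-windows-amd64.exe stop -p demo --schedule 5m
	out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p demo
	out/minikube-windows-amd64.exe stop -p demo --cancel-scheduled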

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-528900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-528900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (380.1248ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-528900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 15:11:31.378298    3600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)
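The MK_USAGE failure above is the expected outcome of combining --kubernetes-version with --no-kubernetes; the error text itself gives the workaround. A minimal sketch of the accepted form:

	# clear any global kubernetes-version, then start without Kubernetes
	out/minikube-windows-amd64.exe config unset kubernetes-version
	out/minikube-windows-amd64.exe start -p NoKubernetes-528900 --no-kubernetes --driver=hyperv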

                                                
                                    

Test skip (29/190)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (7.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-808300 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-808300 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 12808: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (7.96s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (5.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-808300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-808300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0339425s)

                                                
                                                
-- stdout --
	* [functional-808300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:12:19.537059    9636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0603 13:12:19.618901    9636 out.go:291] Setting OutFile to fd 1116 ...
	I0603 13:12:19.619465    9636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:12:19.619565    9636 out.go:304] Setting ErrFile to fd 1120...
	I0603 13:12:19.619565    9636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:12:19.643133    9636 out.go:298] Setting JSON to false
	I0603 13:12:19.647505    9636 start.go:129] hostinfo: {"hostname":"minikube3","uptime":21268,"bootTime":1717399071,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 13:12:19.647609    9636 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 13:12:19.653354    9636 out.go:177] * [functional-808300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 13:12:19.657898    9636 notify.go:220] Checking for updates...
	I0603 13:12:19.661133    9636 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:12:19.663655    9636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:12:19.665426    9636 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 13:12:19.668407    9636 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:12:19.670012    9636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:12:19.673904    9636 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:12:19.675311    9636 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.03s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (5.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-808300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-808300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.033566s)

                                                
                                                
-- stdout --
	* [functional-808300] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0603 13:12:47.544039   14472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0603 13:12:47.615753   14472 out.go:291] Setting OutFile to fd 1160 ...
	I0603 13:12:47.616801   14472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:12:47.616801   14472 out.go:304] Setting ErrFile to fd 1088...
	I0603 13:12:47.616801   14472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:12:47.638177   14472 out.go:298] Setting JSON to false
	I0603 13:12:47.641582   14472 start.go:129] hostinfo: {"hostname":"minikube3","uptime":21296,"bootTime":1717399071,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0603 13:12:47.641747   14472 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0603 13:12:47.645620   14472 out.go:177] * [functional-808300] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0603 13:12:47.648860   14472 notify.go:220] Checking for updates...
	I0603 13:12:47.651690   14472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0603 13:12:47.654234   14472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:12:47.657304   14472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0603 13:12:47.659891   14472 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:12:47.662074   14472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:12:47.665240   14472 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0603 13:12:47.666636   14472 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    